DEV Community: John Patrick Dandison | The latest articles on DEV Community by John Patrick Dandison (@jpda).
https://dev.to/jpda
DarkSky to WeatherKit: from API keys to signed JWTs | John Patrick Dandison | Thu, 20 Apr 2023 23:25:02 +0000
https://dev.to/jpda/darksky-to-weatherkit-from-api-keys-to-signed-jwts-269j
https://dev.to/jpda/darksky-to-weatherkit-from-api-keys-to-signed-jwts-269j<p><iframe width="710" height="399" src="proxy.php?url=https://www.youtube.com/embed/oo5fg0EfoCY?start=300">
</iframe>
</p>
<p>As of March 31st, the Dark Sky API is no more, replaced by Apple's <a href="proxy.php?url=https://developer.apple.com/weatherkit/">WeatherKit</a>. I had used the Dark Sky API because it had a generous free tier, was easy to use, and was flexible enough to customize. </p>
<p>Remember a few years ago when customized terminal prompts were all the rage? Pepperidge Farm remembers - especially people like me who probably went a bit overboard. </p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--HQuLhEIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jue3d8fp25wbzu4z4hh.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--HQuLhEIV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5jue3d8fp25wbzu4z4hh.png" alt="A terminal prompt with too many segments, like weather and stock price" width="800" height="128"></a></p>
<p>Note the weather & stock price. The stock price comes from Yahoo (and follows the same pattern as the weather data); the weather data came from Dark Sky. The prompt reads from a cache file.<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight shell"><code><span class="nv">weatherData</span><span class="o">=</span><span class="si">$(</span>curl <span class="nt">-s</span> <span class="s2">"https://api.darksky.net/forecast/</span><span class="k">${</span><span class="nv">API_KEY</span><span class="k">}</span><span class="s2">/</span><span class="k">${</span><span class="nv">LATLON</span><span class="k">}</span><span class="s2">?exclude=minutely,hourly,daily,alerts,flags"</span><span class="si">)</span>
<span class="nb">echo</span> <span class="nv">$weatherData</span> <span class="o">></span> <span class="nv">$weatherCacheFile</span>
</code></pre>
</div>
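<p>The read side works the same way in reverse. Here's a sketch of the staleness check I mean - the function and file names are mine, and <code>fetch_weather</code> is a hypothetical stand-in for the <code>curl</code> call above:</p>

```shell
# Refresh the cache only when it's missing or older than 15 minutes.
# fetch_weather is a hypothetical stand-in for the curl call that
# writes the weather JSON; swap in your real fetch command.
refresh_weather_cache() {
  cacheFile="$1"
  if [ ! -f "$cacheFile" ] || [ -n "$(find "$cacheFile" -mmin +15 2>/dev/null)" ]; then
    fetch_weather > "$cacheFile"
  fi
}
```

<p>The prompt itself then only ever reads the cache file, so a slow or failed API call never blocks rendering the prompt.</p>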
<p>The API key was a static string that could be regenerated on demand. Running on a trusted machine (like my own), it was no sweat. Of course I wouldn't want to distribute that key, but it was for my own local use - and it was <em>especially</em> easy to work with in shell scripts. </p>
<h2>
Enter WeatherKit
</h2>
<p>Apple pushed the Dark Sky cutoff date back a few times; of course, I missed the final one and wondered why I was suddenly getting 301s and parse errors for my weather info. On March 31st, the API was dead and I had to migrate to WeatherKit. </p>
<p>At first glance I figured it'd be pretty similar - this is weather data, after all - but I was surprised to see that Apple's only <a href="proxy.php?url=https://developer.apple.com/documentation/weatherkitrestapi/request_authentication_for_weatherkit_rest_api">accepted authorization scheme</a> for the API is a bearer token: an EC-signed, asymmetric JWT! Needless to say, I was surprised.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--zpN9lZeq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flwq3m0mzcnjmnje5ds1.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--zpN9lZeq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flwq3m0mzcnjmnje5ds1.png" alt="Surprised Pikachu meme" width="800" height="696"></a></p>
<p>Here's an example:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="p">{</span><span class="w">
</span><span class="nl">"kid"</span><span class="p">:</span><span class="w"> </span><span class="s2">"<KEY ID>"</span><span class="p">,</span><span class="w">
</span><span class="nl">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"<TEAM ID>.<SERVICE ID>"</span><span class="p">,</span><span class="w">
</span><span class="nl">"typ"</span><span class="p">:</span><span class="w"> </span><span class="s2">"JWT"</span><span class="p">,</span><span class="w">
</span><span class="nl">"alg"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ES256"</span><span class="w">
</span><span class="p">}</span><span class="err">,</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nl">"iss"</span><span class="p">:</span><span class="w"> </span><span class="s2">"<TEAM ID>"</span><span class="p">,</span><span class="w">
</span><span class="nl">"iat"</span><span class="p">:</span><span class="w"> </span><span class="mi">1682008228</span><span class="p">,</span><span class="w">
</span><span class="nl">"exp"</span><span class="p">:</span><span class="w"> </span><span class="mi">1682011828</span><span class="p">,</span><span class="w">
</span><span class="nl">"sub"</span><span class="p">:</span><span class="w"> </span><span class="s2">"<SERVICE ID>"</span><span class="w">
</span><span class="p">}</span><span class="err">.<signature></span><span class="w">
</span></code></pre>
</div>
<h2>
Get all of your pieces
</h2>
<p>Sadly, this is not free - unlike the Dark Sky API, WeatherKit requires an Apple Developer Program membership. While the WeatherKit API itself has a generous starting tier (500k calls/month) at no <em>additional</em> cost, you'll still be out $99 for the annual membership. Students and non-profits may qualify for a discount, so ask around. </p>
<p>Apple's docs cover most of what you'll need:</p>
<ul>
<li>your Team ID</li>
<li>the <code>.p8</code> file that contains your key</li>
<li>the key ID</li>
<li>the service ID</li>
<li>lat/lon coordinates of a location</li>
</ul>
<p>The Service ID is a string of your choosing (like <code>dev.jpda.terminal-weather</code>). </p>
<h2>
Testing with Postman's excellent JWT signer
</h2>
<p>To start, Postman has an excellent JWT Bearer auth type that includes an ES256 signer, which makes it much easier to get familiar with the API. I find creating a collection makes this easier, as the authorization settings can be set at the collection level instead of per-request. Here we can configure our token: choose JWT Bearer as the type and <code>ES256</code> as the algorithm, then paste in the private key from the <code>.p8</code> file you downloaded from Apple. </p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--NJteNsi_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26a1c21ylrkqxfw66d84.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--NJteNsi_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26a1c21ylrkqxfw66d84.png" alt="Postman authorization screen" width="800" height="753"></a></p>
<h3>
JWT Payload
</h3>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--kT_J2UAD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9x42fikmcm1qbyzi8nta.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--kT_J2UAD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9x42fikmcm1qbyzi8nta.png" alt="JWT payload screen in postman authz" width="800" height="249"></a></p>
<p>The payload is where we'll add our specific claims - there are four we need: <code>iss</code> (issuer), <code>iat</code> (issued at), <code>exp</code> (expires on) and <code>sub</code> (subject - who/what the token is about).</p>
<ul>
<li>iss: your team ID</li>
<li>sub: your service ID</li>
</ul>
<p>The two timestamps we'll have to sort out ourselves. Postman has a <code>$timestamp</code> global that gives us a Unix epoch time (seconds since 1/1/1970), making <code>iat</code> easy. <code>exp</code> we'll need to calculate from <code>iat</code>, which we can do via a pre-request script.</p>
<p>I've left my information in below as an example:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="p">{</span><span class="w">
</span><span class="nl">"iss"</span><span class="p">:</span><span class="s2">"6RT85M24SE"</span><span class="p">,</span><span class="w">
</span><span class="nl">"iat"</span><span class="p">:</span><span class="w"> </span><span class="p">{{</span><span class="err">$timestamp</span><span class="p">}},</span><span class="w">
</span><span class="nl">"exp"</span><span class="p">:</span><span class="w"> </span><span class="p">{{</span><span class="err">exp_timestamp</span><span class="p">}},</span><span class="w">
</span><span class="nl">"sub"</span><span class="p">:</span><span class="s2">"dev.jpda.terminal-weather"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
<p>Pay particular attention to the timestamps - they should <em>not</em> be in quotes, since they're numeric (longs). Postman will complain that <code>exp_timestamp</code> is not defined, which is OK - we're going to define it next. </p>
<h3>
Variables & pre-request script
</h3>
<p>We'll need at least one variable to hold the calculated expiration value. I put my coordinates in variables called <code>HOME_LAT</code> and <code>HOME_LON</code> since they will be available to the whole collection.</p>
<p>Add a variable called <code>exp_timestamp</code> - it doesn't need an initial value.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s---zv6YhMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vpa3umogusl15r9bq9h.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s---zv6YhMD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vpa3umogusl15r9bq9h.png" alt="Variables screen" width="800" height="406"></a></p>
<p>Lastly, head into the collection's Pre-request Script tab. Here we can calculate the expiration timestamp like so:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight javascript"><code><span class="kd">var</span> <span class="nx">now</span> <span class="o">=</span> <span class="nx">pm</span><span class="p">.</span><span class="nx">variables</span><span class="p">.</span><span class="nx">replaceIn</span><span class="p">(</span><span class="dl">"</span><span class="s2">{{$timestamp}}</span><span class="dl">"</span><span class="p">)</span><span class="o">*</span><span class="mi">1</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">exp</span> <span class="o">=</span> <span class="nx">now</span> <span class="o">+</span> <span class="mi">3600</span><span class="p">;</span> <span class="c1">// adds +1 hour</span>
<span class="nx">pm</span><span class="p">.</span><span class="nx">collectionVariables</span><span class="p">.</span><span class="kd">set</span><span class="p">(</span><span class="dl">"</span><span class="s2">exp_timestamp</span><span class="dl">"</span><span class="p">,</span> <span class="nx">exp</span><span class="p">);</span>
</code></pre>
</div>
<p>Because of how Postman's variables work, <code>replaceIn</code> is required to resolve the global <code>$timestamp</code> variable inside a script.</p>
<h2>
JWT Header
</h2>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--w36bWIjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hyvzzf7mikuue4negeh.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--w36bWIjb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5hyvzzf7mikuue4negeh.png" alt="JWT Header screen in postman authz" width="800" height="437"></a></p>
<p>Back in the Authorization tab, we have one more item to manage - we need to add two fields to the JWT's header: the <code>kid</code> (key ID) and <code>id</code> (the team ID + service ID of your app). These go under 'Advanced configuration.' We add our two fields, and Postman handles adding the standard claims like <code>alg</code> (the algorithm) and <code>typ</code>.</p>
<p>I've left my values in as an example:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="p">{</span><span class="w">
</span><span class="nl">"kid"</span><span class="p">:</span><span class="s2">"PMV7JU5DJJ"</span><span class="p">,</span><span class="w">
</span><span class="nl">"id"</span><span class="p">:</span><span class="s2">"6RT85M24SE.dev.jpda.terminal-weather"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
<h2>
Ready! Let's get the weather...! Right?
</h2>
<p>OK, at long last we are ready to get the temperature! Well, not quite. First we need to see which datasets are available for our location. </p>
<p>Create a new request (making sure to save it to the collection we just configured for authorization) and drop in this URL:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>https://weatherkit.apple.com/api/v1/availability/{{HOME_LAT}}/{{HOME_LON}}?country=US
</code></pre>
</div>
<p>Make sure the request's Authorization tab is set to 'Inherit auth from parent', that you have variables set for <code>{{HOME_LAT}}</code> and <code>{{HOME_LON}}</code>, and update the country if necessary. Run it and, if all went well, we get back an array of the datasets available for that location.<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="p">[</span><span class="w">
</span><span class="s2">"currentWeather"</span><span class="p">,</span><span class="w">
</span><span class="s2">"forecastDaily"</span><span class="p">,</span><span class="w">
</span><span class="s2">"forecastHourly"</span><span class="p">,</span><span class="w">
</span><span class="s2">"forecastNextHour"</span><span class="p">,</span><span class="w">
</span><span class="s2">"weatherAlerts"</span><span class="w">
</span><span class="p">]</span><span class="w">
</span></code></pre>
</div>
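<p>Outside Postman, the equivalent request is a one-liner with <code>curl</code>. A sketch - the <code>WEATHERKIT_TOKEN</code> variable and the default coordinates are placeholders of mine:</p>

```shell
# Build the availability URL; coordinates default to example values.
HOME_LAT="${HOME_LAT:-42.86}"
HOME_LON="${HOME_LON:--82.59}"
url="https://weatherkit.apple.com/api/v1/availability/${HOME_LAT}/${HOME_LON}?country=US"
echo "$url"
# With a valid JWT in WEATHERKIT_TOKEN, the actual call would be:
# curl -s -H "Authorization: Bearer ${WEATHERKIT_TOKEN}" "$url"
```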
<p>If you get an error, take the generated JWT (from the headers tab of the request) and drop it in your favorite JWT decoder (like jwt.io) and check the contents. </p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--7y4kWAgG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ot4j3gzof5paq67eytct.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--7y4kWAgG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ot4j3gzof5paq67eytct.png" alt="get authz header" width="800" height="202"></a></p>
<p>Note that the headers shown there may be out of date; to get the exact token used in a failed request, look in the Postman console (View --> Show Console). </p>
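<p>If you'd rather not paste tokens into a website, the same inspection can be done locally. A small helper (a sketch - <code>jwt_decode</code> is my own name for it):</p>

```shell
# Print one segment of a JWT: 1 = header, 2 = payload.
jwt_decode() {
  # undo the base64url character substitutions...
  seg=$(printf '%s' "$1" | cut -d. -f"$2" | tr '_-' '/+')
  # ...and restore the padding that JWT encoding strips
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | openssl base64 -d -A
  echo
}
```

<p>Run it as <code>jwt_decode "$TOKEN" 2</code> to check that <code>iss</code>, <code>sub</code> and the timestamps look right.</p>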
<h2>
Ok so <em>NOW</em> we can get the weather?
</h2>
<p>At long last, now that we know what's available, <code>currentWeather</code> should give us what we're looking for. Create a new request in Postman in this collection and let's apply what we've determined:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>https://weatherkit.apple.com/api/v1/weather/en/{{HOME_LAT}}/{{HOME_LON}}?dataSets=currentWeather&country=US
</code></pre>
</div>
<p>Note the <code>/en/</code> in the path - that's the language code for the response, so use the appropriate code for your locale. I'm only interested in current weather, so that's the only dataset I'll request.<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="p">{</span><span class="w">
</span><span class="nl">"currentWeather"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"CurrentWeather"</span><span class="p">,</span><span class="w">
</span><span class="nl">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nl">"attributionURL"</span><span class="p">:</span><span class="w"> </span><span class="s2">"https://weatherkit.apple.com/legal-attribution.html"</span><span class="p">,</span><span class="w">
</span><span class="nl">"expireTime"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-04-20T21:20:01Z"</span><span class="p">,</span><span class="w">
</span><span class="nl">"latitude"</span><span class="p">:</span><span class="w"> </span><span class="mf">42.860</span><span class="p">,</span><span class="w">
</span><span class="nl">"longitude"</span><span class="p">:</span><span class="w"> </span><span class="mf">-82.590</span><span class="p">,</span><span class="w">
</span><span class="nl">"readTime"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-04-20T21:15:01Z"</span><span class="p">,</span><span class="w">
</span><span class="nl">"reportedTime"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-04-20T20:02:16Z"</span><span class="p">,</span><span class="w">
</span><span class="nl">"units"</span><span class="p">:</span><span class="w"> </span><span class="s2">"m"</span><span class="p">,</span><span class="w">
</span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nl">"asOf"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-04-20T21:15:01Z"</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudCover"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.65</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudCoverLowAltPct"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.03</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudCoverMidAltPct"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.66</span><span class="p">,</span><span class="w">
</span><span class="nl">"cloudCoverHighAltPct"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.34</span><span class="p">,</span><span class="w">
</span><span class="nl">"conditionCode"</span><span class="p">:</span><span class="w"> </span><span class="s2">"MostlyCloudy"</span><span class="p">,</span><span class="w">
</span><span class="nl">"daylight"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
</span><span class="nl">"humidity"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.55</span><span class="p">,</span><span class="w">
</span><span class="nl">"precipitationIntensity"</span><span class="p">:</span><span class="w"> </span><span class="mf">0.0</span><span class="p">,</span><span class="w">
</span><span class="nl">"pressure"</span><span class="p">:</span><span class="w"> </span><span class="mf">1013.66</span><span class="p">,</span><span class="w">
</span><span class="nl">"pressureTrend"</span><span class="p">:</span><span class="w"> </span><span class="s2">"falling"</span><span class="p">,</span><span class="w">
</span><span class="nl">"temperature"</span><span class="p">:</span><span class="w"> </span><span class="mf">18.98</span><span class="p">,</span><span class="w">
</span><span class="nl">"temperatureApparent"</span><span class="p">:</span><span class="w"> </span><span class="mf">18.33</span><span class="p">,</span><span class="w">
</span><span class="nl">"temperatureDewPoint"</span><span class="p">:</span><span class="w"> </span><span class="mf">9.65</span><span class="p">,</span><span class="w">
</span><span class="nl">"uvIndex"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
</span><span class="nl">"visibility"</span><span class="p">:</span><span class="w"> </span><span class="mf">30544.32</span><span class="p">,</span><span class="w">
</span><span class="nl">"windDirection"</span><span class="p">:</span><span class="w"> </span><span class="mi">147</span><span class="p">,</span><span class="w">
</span><span class="nl">"windGust"</span><span class="p">:</span><span class="w"> </span><span class="mf">28.80</span><span class="p">,</span><span class="w">
</span><span class="nl">"windSpeed"</span><span class="p">:</span><span class="w"> </span><span class="mf">16.96</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
<p>Success! It's all in metric units - the docs seem to indicate metric is what you get and you'll like it - so there's a bit of work to convert from C to F (or K, if you're adventurous).</p>
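<p>Since the prompt ultimately just needs one number, that conversion is a one-liner. A sketch - <code>c_to_f</code> is a hypothetical helper of mine:</p>

```shell
# Convert the API's Celsius reading to Fahrenheit.
c_to_f() { awk -v c="$1" 'BEGIN { printf "%.1f", c * 9 / 5 + 32 }'; }

c_to_f 18.98   # the "temperature" value above -> 66.2
```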
<h2>
Now what?
</h2>
<p>Now that we can at least <em>get</em> the data again, how are we going to use it? Remember, I was using this in my terminal - <code>curl</code> fetching the data with the API key in an environment variable. What's the best approach now?</p>
<p>Whatever we use is going to need to access the private key - meaning whatever is generating those JWTs will need to be trusted - local, remote, etc. </p>
<h3>
Local executable
</h3>
<p>This is pretty straightforward, and we can easily balance JWT lifetime against regeneration overhead. Perhaps once per day with a 24-hour JWT? I wouldn't normally advocate for long-lived tokens, but since the token is generated locally and the data is relatively low-risk, it seems like a decent tradeoff. </p>
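<p>To show it's feasible, here's a sketch of such a local generator using only <code>openssl</code> and standard tools. All the IDs and file paths are placeholders of mine, and the throwaway-key fallback exists purely so the script runs end-to-end without a real Apple key - in real use, point <code>KEY_FILE</code> at the <code>.p8</code> you downloaded:</p>

```shell
#!/bin/sh
# Sketch: mint an ES256 JWT for WeatherKit with openssl.
# KEY_FILE/KEY_ID/TEAM_ID/SERVICE_ID are placeholders - use your own.
KEY_FILE="${KEY_FILE:-/tmp/demo-authkey.p8}"
KEY_ID="${KEY_ID:-YOURKEYID99}"
TEAM_ID="${TEAM_ID:-YOURTEAMID9}"
SERVICE_ID="${SERVICE_ID:-dev.example.terminal-weather}"

# Demo-only fallback: generate a throwaway P-256 key if none exists.
[ -f "$KEY_FILE" ] || openssl ecparam -genkey -name prime256v1 -noout -out "$KEY_FILE"

b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

now=$(date +%s)
exp=$((now + 86400))   # 24-hour lifetime, per the tradeoff above

header=$(printf '{"alg":"ES256","kid":"%s","id":"%s.%s","typ":"JWT"}' "$KEY_ID" "$TEAM_ID" "$SERVICE_ID")
payload=$(printf '{"iss":"%s","iat":%s,"exp":%s,"sub":"%s"}' "$TEAM_ID" "$now" "$exp" "$SERVICE_ID")
signing_input="$(printf %s "$header" | b64url).$(printf %s "$payload" | b64url)"

# JWT ES256 wants a raw 64-byte r||s signature, but openssl emits
# ASN.1/DER - so pull r and s out with asn1parse, left-pad each to
# 32 bytes (dropping any leading-zero octet), and re-encode.
sig=$(printf %s "$signing_input" \
  | openssl dgst -sha256 -sign "$KEY_FILE" \
  | openssl asn1parse -inform DER \
  | awk -F: '/INTEGER/ { h = $4; if (length(h) > 64) h = substr(h, length(h) - 63); printf "%64s", h }' \
  | tr ' ' 0 | xxd -r -p | b64url)

jwt="${signing_input}.${sig}"
echo "$jwt"
```

<p>Cache the resulting token alongside the weather cache, regenerate it when <code>exp</code> passes, and the old <code>curl</code>-based prompt works again with <code>-H "Authorization: Bearer $jwt"</code>.</p>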
<h3>
Remote
</h3>
<p>Something remote sounds neat too - like a Lambda or an Azure Function - but once we leave our trusted space, we have to figure out how only the correct clients can get tokens to call the weather service: something like AAD, Okta, Auth0, etc. to secure the JWT-issuing API. At that point, it may be easier to let the function fetch the weather data itself <em>and cache it</em>, further reducing the number of paid API calls to WeatherKit. Something like:</p>
<p>Client --> (secured via cloud auth service) http proxy API (keeps Apple private key + generates JWTs) --> WeatherKit</p>
<p>This is, in fact, a lot like the BFF pattern (backends for frontends) used for things like storing secrets securely, caching API responses, etc. </p>
<p>Remote has tradeoffs, of course, like hosting, downtime, etc. And it's one more thing to break :D</p>
<h2>
Next, we'll pick!
</h2>
<p><a href="proxy.php?url=https://aka.wtf/twitch">Join me on Twitch</a> and we'll decide what's next!</p>
Tags: weatherkit, programming, csharp, dotnet

425show @ Build 2021 | John Patrick Dandison | Tue, 25 May 2021 21:35:06 +0000
https://dev.to/425show/425show-build-2021-2a7e
https://dev.to/425show/425show-build-2021-2a7e<p>Thanks for joining us at Build! We'll be updating this post throughout the week with code samples, docs and other info after our sessions.</p>
<h2>
Learn Live: Application types in Microsoft Identity (5/25, 3p ET)
</h2>
<p><a href="proxy.php?url=https://mybuild.microsoft.com/sessions/9eadeef5-96a2-4fd2-ac9a-2a83deed93df">This session</a> kicks off a 5-week series every Monday through June, where Christos & I will take you through the <a href="proxy.php?url=https://docs.microsoft.com/en-us/learn/paths/m365-identity-associate/">Implement Microsoft identity - Associate</a> learning path. Join us each Monday from June 7th - July 12th as we'll go into the why behind the how.</p>
<p>Find the code samples from this session, complete with CodeTour!</p>
<ul>
<li><a href="proxy.php?url=https://github.com/425show/learn-live-spa">Single-page applications</a></li>
<li><a href="proxy.php?url=https://github.com/425show/learn-live-webapp">Web app that signs in users and calls APIs</a></li>
<li><a href="proxy.php?url=https://github.com/425show/learn-live-daemon">Headless/daemon apps</a></li>
</ul>
<p>Stay tuned, more to come throughout the week! <a href="proxy.php?url=https://aka.ms/425show/discord/join">Join us on discord</a></p>
Tags: build, javascript, security

Quick hits: Azure AD B2C with the MSAL v2 Angular alpha | John Patrick Dandison | Tue, 01 Dec 2020 18:02:02 +0000
https://dev.to/425show/quick-hits-azure-ad-b2c-with-the-msal-v2-angular-alpha-4kpc
https://dev.to/425show/quick-hits-azure-ad-b2c-with-the-msal-v2-angular-alpha-4kpc<p>We get a lot of interesting questions over here at 425 HQ. Today a friend is working with a customer's Angular app - they're interested in using the msal-angular v2 alpha so they can use the auth code flow with PKCE instead of implicit. There is a sample app <a href="proxy.php?url=https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-angular-v2-samples/angular10-browser-sample">here</a>, although that app is for regular Azure AD, not B2C. While the library usage is largely the same between AAD and B2C, the configuration is not. </p>
<p>To use the alpha sample app with B2C, make sure to add an authority that includes your B2C policy/user flow ID, in addition to adding it to the <code>knownAuthorities</code> block. For example, in <code>app.module.ts</code>:<br>
</p>
<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>function MSALInstanceFactory(): IPublicClientApplication {
return new PublicClientApplication({
auth: {
clientId: 'your-app-id-guid',
redirectUri: 'http://localhost:4200',
authority: 'https://your-b2c-tenant.b2clogin.com/your-b2c-tenant.onmicrosoft.com/B2C_1_your_user_flow'
knownAuthorities: [
'https://your-b2c-tenant.b2clogin.com/your-b2c-tenant.onmicrosoft.com/B2C_1_your_user_flow',
'https://your-b2c-tenant.b2clogin.com/your-b2c-tenant.onmicrosoft.com/B2C_1_another_flow'
]
}
});
}
</code></pre>
</div>
<p>You may also need to add a dummy scope to your app registration - earlier versions of msal-browser expected an access token, but B2C doesn't return one by default if you don't request any scopes beyond <code>openid</code>.</p>
Tags: azure, identity, b2c

Just what *is* the /.default scope in the Microsoft identity platform & Azure AD? | John Patrick Dandison | Wed, 30 Sep 2020 15:56:10 +0000
https://dev.to/425show/just-what-is-the-default-scope-in-the-microsoft-identity-platform-azure-ad-2o4d
https://dev.to/425show/just-what-is-the-default-scope-in-the-microsoft-identity-platform-azure-ad-2o4d<p><iframe width="710" height="399" src="proxy.php?url=https://www.youtube.com/embed/BBQWSksJ7nY?start=2666">
</iframe>
</p>
<p><em>We talked about this in our last community hours. Check out the video above!</em></p>
<p>If you’ve ever worked with the Microsoft identity platform (aka Azure AD, aka Azure AD B2C), there is a good chance that you have had to work with scopes, including the <code>/.default</code> scope. In this blog post, we’re going to cover some of the basics and explain what the <code>/.default</code> scope is, when to use it and why.</p>
<h2>
Quick background
</h2>
<p>When we need to connect to APIs or services secured with OAuth2 (called <em>resources</em> in openid and oauth parlance), such as the <a href="proxy.php?url=http://aka.ms/ge" rel="noopener noreferrer">Microsoft Graph API</a>, Azure APIs, third-party APIs or our own APIs, we need to request an access token for that resource. For Microsoft services, the vast majority are secured with Azure AD, meaning you'll need to get tokens from Azure AD in order to use those services from your applications.</p>
<blockquote>
<p>Sometimes OAuth2 is referred to as 'Bearer Authentication' - this is due to how we use the token with a resource: it's put into the <code>Authorization</code> HTTP header with a value of <code>Bearer &lt;access_token value&gt;</code></p>
</blockquote>
<p>When users sign in to your app, Azure AD sends you back an <code>id_token</code>, which proves their identity to <em>your application</em> - but when your application also needs to connect to a resource, we need an access token specifically for that resource. In short, the <code>id_token</code> is for your application to use, while the <code>access_token</code> is for you to send to the other resources (APIs) your app is using.</p>
<h2>
Scopes & permissions
</h2>
<p>Scopes can be thought of as permissions within a resource – for example, Microsoft Graph exposes Office 365 scopes like <code>Mail.Read</code> (read the user’s mail) or <code>Files.Read</code> (read the user’s OneDrive files). In this case, the <em>resource</em> is <code>https://graph.microsoft.com</code> while the permission is <code>Mail.Read</code>. Putting these together gives us our full <em>scope</em>: <code>https://graph.microsoft.com/Mail.Read</code>.</p>
<p>Once we know our scope, we need to ask the user if it's ok to use that scope. This is called <em>consent</em>. If the user consents to <code>https://graph.microsoft.com/Mail.Read</code>, our application will get an access token it can use to read a user’s mailbox.</p>
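<p>To make the mechanics concrete, here's roughly where that scope ends up in the authorize request your app sends the user to. This is a simplified sketch - the values are placeholders, and only spaces are percent-encoded here:</p>

```shell
# Hypothetical values; a real app (or MSAL) would fully URL-encode these.
TENANT="common"
CLIENT_ID="your-app-id-guid"
SCOPE="openid profile https://graph.microsoft.com/Mail.Read"

authorize_url="https://login.microsoftonline.com/${TENANT}/oauth2/v2.0/authorize?client_id=${CLIENT_ID}&response_type=code&scope=$(printf %s "$SCOPE" | sed 's/ /%20/g')"
echo "$authorize_url"
```

<p>The consent prompt the user sees is generated from exactly that <code>scope</code> list.</p>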
<p>You can think of it like permissions on your phone – when you install a new app, most smartphones will say "This app would like to use your location. Is that OK?"</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgdme3e3xdulzl5q9197n.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgdme3e3xdulzl5q9197n.png" alt="03-phone-perm"></a></p>
<p>What makes scopes so useful in building user trust is that our users can allow other apps to access their data without having to give those other apps full access to everything. In my phone example, if I grant app ABC access to my location, that’s all the app receives – it can’t also read my contacts, for example, unless the app asks and the user allows that to happen.</p>
<h2>
Let's order some business cards!
</h2>
<p>Imagine we’ve built an app that handles ordering business cards. </p>
<p>It would be really helpful to pre-populate the user’s info in the card, so we request the <code>User.Read</code> permission from <strong>Microsoft Graph</strong>. This includes profile information like their display name, email address, etc. </p>
<blockquote>
<p>If you're keeping score at home, that means we'll need this scope: <code>https://graph.microsoft.com/User.Read</code></p>
</blockquote>
<p>But we also want to give them an option to save their design, or upload a custom design. We implement a normal file picker so a user can upload from their PC, but wouldn’t it be cool to offer upload and download from <strong>OneDrive</strong> or a <strong>SharePoint site</strong>? To do that, we will need the <code>Files.ReadWrite</code> and <code>Sites.ReadWrite.All</code> permissions. </p>
<blockquote>
<p>Now our scope list is:</p>
<ul>
<li><code>https://graph.microsoft.com/User.Read</code></li>
<li><code>https://graph.microsoft.com/Files.ReadWrite</code></li>
<li><code>https://graph.microsoft.com/Sites.ReadWrite.All</code></li>
</ul>
</blockquote>
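<p>When the app kicks off sign-in, these scopes travel space-delimited in the OAuth 2.0 <code>scope</code> parameter of the authorize request. Here's a minimal sketch in TypeScript (the tenant, client ID and redirect URI are placeholders, and in a real app a library like MSAL builds this URL for you):</p>

```typescript
// Builds a v2 authorize URL for our business-card app's sign-in.
// All identifiers below are placeholders, not real registrations.
const graphScopes = [
  "https://graph.microsoft.com/User.Read",
  "https://graph.microsoft.com/Files.ReadWrite",
  "https://graph.microsoft.com/Sites.ReadWrite.All",
];

function buildAuthorizeUrl(
  tenant: string,
  clientId: string,
  redirectUri: string,
  scopes: string[]
): string {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: "code",
    redirect_uri: redirectUri,
    // multiple scopes go space-delimited into a single scope parameter
    scope: scopes.join(" "),
  });
  return `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/authorize?${params.toString()}`;
}
```

<p>Requesting all three scopes in one go like this is exactly what produces an intimidating consent prompt, as we'll see in a moment.</p>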
<p>Remember, however - not all users will want to do this. Some users won't have a OneDrive or SharePoint site; others may keep all of their files locally.</p>
<blockquote>
<p>Sample of users actually using these permissions:</p>
<ul>
<li>
<code>https://graph.microsoft.com/User.Read</code>: this is for signing into the app - 100% of users will use this</li>
<li>
<code>https://graph.microsoft.com/Files.ReadWrite</code>: this is for accessing OneDrive - say, 40% of users will do this</li>
<li>
<code>https://graph.microsoft.com/Sites.ReadWrite.All</code>: this is for accessing a SharePoint site - we'll estimate 10% of users want to do this</li>
</ul>
</blockquote>
<p>In Azure AD v1 (which you may also hear referred to as ‘the v1 endpoint’), we had to add all the permissions we needed upfront - configured on the App Registration in Azure AD. This meant users would also need to allow all of those permissions upfront, just to sign in to our app. This is called <em>‘static consent’</em>. Let's see what that looks like:</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgrmoycrr259nyx7qyes8.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgrmoycrr259nyx7qyes8.jpg" alt="dev-view"></a></p>
Developer/admin view
<p>Meanwhile, this is what your users will see when signing into the app:</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe51613aiw5k2f5790nlb.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe51613aiw5k2f5790nlb.jpg" alt="02-consent-screen"></a></p>
What your users will see
<p>Whoa! That's a lot of permissions. I just wanted to order some business cards, but now I'm being asked to let this app I've never used write to my files? Access SharePoint sites? No way, I'm outta here - I'll go order business cards somewhere else.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftyrewb074k5hju00xiqw.gif" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ftyrewb074k5hju00xiqw.gif" alt="blocks"></a></p>
Not today
<p>If the user <em>does</em> allow those permissions, it means your application is going to have permission to read a user’s files, even if you never need to read their files! Not particularly optimal for our app or our users.</p>
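<p>The root of the problem shows up at the protocol level: a v1 authorize request names a single <code>resource</code> and relies entirely on the statically configured permission list, while a v2 request carries an explicit <code>scope</code> list. A rough sketch of the two query strings (placeholder client ID, not a real registration):</p>

```typescript
// v1-style request: one resource; the permissions themselves are
// fixed on the app registration, not in the request
function v1Query(clientId: string, resource: string): string {
  return new URLSearchParams({
    client_id: clientId,
    response_type: "code",
    resource, // e.g. "https://graph.microsoft.com"
  }).toString();
}

// v2-style request: the scopes travel in the request itself,
// so they can vary from one request to the next
function v2Query(clientId: string, scopes: string[]): string {
  return new URLSearchParams({
    client_id: clientId,
    response_type: "code",
    scope: scopes.join(" "),
  }).toString();
}
```
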
<h2>
Dynamic consent
</h2>
<p>The Azure AD v2 (aka Microsoft identity platform, aka ‘the v2 endpoint’) scope & permission system fixes this by allowing <em>dynamic consent</em> – instead of requiring developers to declare all permissions upfront, v2 allows developers to ask at any time. In our business card example, this means a user is only asked to consent to their profile data being read on sign-in, while users who elect to 'upload a custom design from OneDrive' (by clicking a button in your app, etc.) will be prompted for the <code>Files.ReadWrite</code> permission when they perform the action. Better for our users, better for us as the developer 😊</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmburgmmuh88i9e0yr796.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmburgmmuh88i9e0yr796.jpg" alt="v2-consent"></a></p>
Much better
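<p>With incremental requests like this, your app can't assume which permissions it already holds – a token acquired at sign-in may only carry <code>User.Read</code>. One way to check before prompting again is to look at the space-delimited <code>scp</code> claim in the access token. A minimal Node-flavored sketch – note this is an <em>unverified</em> decode for illustration only; a real app must validate the token's signature:</p>

```typescript
// Pulls the delegated permissions out of a JWT's `scp` claim.
// Illustration only: no signature validation is performed here.
function grantedScopes(accessToken: string): string[] {
  const payloadSegment = accessToken.split(".")[1];
  const claims = JSON.parse(
    Buffer.from(payloadSegment, "base64url").toString("utf8")
  );
  return typeof claims.scp === "string"
    ? claims.scp.split(" ").filter((s: string) => s.length > 0)
    : [];
}

// True if we'd need to go back and ask the user for this scope
function needsConsentFor(accessToken: string, scope: string): boolean {
  return !grantedScopes(accessToken).includes(scope);
}
```
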
<h2>
/.default
</h2>
<p>Now that we’ve written two pages of what got us here, let’s get to the point of this post – the <code>/.default</code> scope. In short, the <code>/.default</code> scope is a shortcut back to the Azure AD v1 behavior (i.e., static consent). When we request a <code>/.default</code> scope (for example, <code>https://graph.microsoft.com/.default</code>), our users will be asked to consent to all of the configured permissions present on our Azure AD App Registration for that specific resource (e.g., <code>https://graph.microsoft.com</code>). </p>
<p>In our business card example, this means that a user would be prompted to consent to all permissions we’ve configured upfront.</p>
<p>For migrating old apps that currently use Azure AD v1 (including apps that use ADAL, the Active Directory Authentication Library), the <code>/.default</code> scope offers an easier migration path, as the developer gets the same static-consent behavior from the v2 endpoint, keeping the experience consistent.</p>
<h2>
When <strong>must</strong> I use the /.default scope?
</h2>
<p>There are two other scenarios where the <code>/.default</code> scope is required: </p>
<ul>
<li><p><a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow" rel="noopener noreferrer"><strong><code>client_credentials</code></strong></a>, where our app is making service-to-service calls or using application-only permissions (also known as application app roles in Azure AD parlance), or</p></li>
<li><p>when using the <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow" rel="noopener noreferrer"><strong><code>on-behalf-of</code></strong> (OBO) flow</a>, where our API is making calls on behalf of the user to a different API; something like this: client app --> our API --> Graph API.</p></li>
</ul>
<p>In both scenarios above, there is no user interface and no user interaction. In the <code>client_credentials</code> case, an application is using its own identity (not that of a user), so there is no concept of dynamic consent, as the application must statically configure the permissions that it needs - who would it ask for extra permission at runtime if there is no user present?</p>
<p><code>on-behalf-of</code> (OBO) is similar; since the backend API is making a request to another API, there is no user interface for asking for additional permissions, so permissions must be set statically between APIs. This means that your token requests for service-to-service and on-behalf-of flows must use a scope of <code>your-app-id-uri/.default</code> – e.g., <code>https://graph.microsoft.com/.default</code> or <code>https://your-app.your-co.com/.default</code>. </p>
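<p>To make that concrete, here's roughly what the form body of a <code>client_credentials</code> token request looks like – note the scope is the resource's App ID URI plus <code>/.default</code>, never an individual permission name. The credentials are placeholders, and in practice MSAL's confidential client builds this for you:</p>

```typescript
// Builds the form body for a client_credentials token request.
// clientId/clientSecret are placeholders -- real secrets belong in
// a secret store, never in source code.
function clientCredentialsBody(
  clientId: string,
  clientSecret: string,
  resourceAppIdUri: string
): URLSearchParams {
  return new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    // /.default pulls in whatever application permissions were
    // statically configured (and consented to) on the app registration
    scope: `${resourceAppIdUri}/.default`,
  });
}
```

<p>POSTing this body to the tenant's v2 token endpoint returns an app-only access token.</p>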
<p>You can read up more on the official doc about /.default, scopes & consent here: <a href="proxy.php?url=https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent" rel="noopener noreferrer">Microsoft identity platform scopes, permissions, and consent</a>.</p>
<p>Check us out live Tuesdays & Thursdays at 10a ET, 7a PT at <a href="proxy.php?url=https://aka.ms/425show" rel="noopener noreferrer">https://aka.ms/425show</a>! </p>
Tags: azure, azuread, identity, security

Connecting to Azure blob storage from React using Azure.Identity! (John Patrick Dandison, Wed, 05 Aug 2020 21:06:43 +0000)
https://dev.to/425show/connecting-to-azure-blob-storage-from-react-using-azure-identity-16l0
https://dev.to/425show/connecting-to-azure-blob-storage-from-react-using-azure-identity-16l0<p><iframe width="710" height="399" src="proxy.php?url=https://www.youtube.com/embed/hLHIpN3qN_M?start=4271">
</iframe>
</p>
<p><em>take me straight to the code!</em></p>
<p>On stream, after our great talk with Simon Brown, we decided to dig into building a fully client-side app that connects to Azure Blob Storage.</p>
<h2>
What is blob storage?
</h2>
<p>Just that - storage for blobs of data, big and small. Historically the name stood for 'Binary Large OBjects', although that usage came mostly from SQL circles, for data stored in databases. Regardless of the origin, blob storage (Azure's counterpart to S3 at AWS) is a staple of modern apps. </p>
<p>Azure Blob storage has some unique features that make designing apps even easier. For example - a <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/storage/common/scalability-targets-standard-account" rel="noopener noreferrer">standard storage account</a> has a maximum egress of up to <em>50 Gb/s!</em> - that's 50 Gb/s that your app server/platform doesn't have to handle on its own. </p>
<p>This works for upload too - standard storage in the US has a max ingress of 10 Gb/s. Clients uploading or downloading directly to and from storage accounts can have a <em>massive</em> impact on your app's design, cost and scalability.</p>
<p>We've seen customers leverage this over the years - for example, streaming large media assets (think videos, pictures, datasets) from blob storage <em>directly</em> to clients instead of proxying through your app server. </p>
<p>Take this scenario - I want to share videos and pictures with people I work with, or with the internet as a whole. Previously, I would have had some storage - a network share, NAS device - and my app server would have some sort of API exposed to access that data. My app would have to send and receive data from clients, which meant my app servers would need enough bandwidth for pushing and pulling all that data around.</p>
<p>By using storage directly, my servers and APIs can direct clients to upload and download directly from storage, significantly reducing compute bandwidth requirements, with the benefit of a worldwide footprint of storage locations. </p>
<h2>
But how do we ensure secure access?
</h2>
<p>Historically, we used <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview" rel="noopener noreferrer">shared access signatures (SAS)</a> tokens with storage, which are time- and operation-limited URLs with a signature for validation. For example - I'd like <strong>Read</strong> access to <strong><code>https://storageaccount.blob.core.windows.net/container/blob1.mp4</code></strong> for the next <strong>60 seconds</strong> - this would generate a URL with some parameters, which was then signed with the master storage account key, then the signature was tacked onto the end of the URL. Then we share that URL with whatever client needed to do the operations.</p>
<p>This was cool, except it meant we needed some server-side API or web server to store and manage the master account key, since we can't send it directly to the client.</p>
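<p>To illustrate the pattern, here's a simplified sketch of that server-side signing step. The string-to-sign below is <em>not</em> Azure's real canonical format (the actual one has many more fields, and in practice you'd use the SAS helpers in <code>@azure/storage-blob</code>), but it shows why the account key has to stay server-side:</p>

```typescript
import { createHmac } from "crypto";

// Simplified SAS-style signing: HMAC the allowed permission and expiry
// together with the URL using the account key, then tack the signature
// onto the URL. Illustrative only -- not Azure's real string-to-sign.
function signSasStyleUrl(
  blobUrl: string,
  permissions: string, // e.g. "r" for read
  expiresUtc: string, // e.g. "2020-08-05T21:07:43Z"
  accountKeyBase64: string
): string {
  const stringToSign = [permissions, expiresUtc, blobUrl].join("\n");
  const sig = createHmac("sha256", Buffer.from(accountKeyBase64, "base64"))
    .update(stringToSign, "utf8")
    .digest("base64");
  const query = new URLSearchParams({ sp: permissions, se: expiresUtc, sig });
  return `${blobUrl}?${query.toString()}`;
}
```

<p>Anyone holding the resulting URL can perform exactly that operation until the expiry; the service can verify the signature because it also holds the key, but <em>minting</em> the URL requires the key itself - hence the server-side requirement.</p>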
<h2>
Enter Azure AD & Storage Blob Data RBAC
</h2>
<p>If you're familiar with Azure, you know there are two distinct 'planes' - the control plane (the management interface) and the data plane (the actual resource data). I like to think of it as the difference between being able to deploy a VM vs actually having credentials to RDP or SSH into it.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffrkorhgfd60sta5zjtub.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffrkorhgfd60sta5zjtub.jpg" alt="Alt Text"></a>If you've seen this screen before, you've used Azure RBAC </p>
<p>All Azure resources have some degree of control-plane role-based access control - things like 'Resource group owner' or 'resource group reader' - that allow management operations on those resources. Over time, more and more data plane operations have been added, so we can use Azure RBAC to control both who can manage the resource and who has access to the resource or its data. The advantage here is furthering the 'least privilege' mantra - a storage account key is the proverbial key to the castle, so if we can limit operations on an ongoing basis, we can limit the blast radius of any bad actors.</p>
<p>Storage has roles specifically for connecting to the account's data plane - connecting to blobs specifically, for example. In the IAM/role assignments blade for the storage account, note the 'Storage Blob Data...' roles. These give Azure AD accounts (users <em>and</em> service principals) access to the blobs directly. </p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjwifo8x5mjzqtei7514.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjwifo8x5mjzqtei7514.jpg" alt="Alt Text"></a>The different storage-related roles </p>
<p>We're going to use this to build our client-side blob reader app.</p>
<h2>
Bill of Materials
</h2>
<p>We're going to:</p>
<ul>
<li>deploy a storage account to Azure</li>
<li>add a user to the <code>Storage Blob Data Reader</code> role</li>
<li>Register an app in Azure AD to represent our React app</li>
<li>Create a quick-and-dirty React app</li>
<li>Add Azure Identity dependencies</li>
<li>Authenticate the user and list out our blobs</li>
</ul>
<h2>
Setting up our blob storage account
</h2>
<p><em>Want to use CLI but don't have it setup yet? Try <a href="proxy.php?url=https://shell.azure.com/" rel="noopener noreferrer">Azure Cloud Shell</a> straight from your browser, or <a href="proxy.php?url=https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest" rel="noopener noreferrer">read here</a> on getting it installed for your platform</em></p>
<p>CLI for a standard, LRS, v2 storage account:</p>
<div class="highlight js-code-highlight">
<pre class="highlight shell"><code>
az storage account create <span class="nt">--name</span> somednssafename <span class="nt">--resource-group</span> some-resource-group-name <span class="nt">--kind</span> StorageV2 <span class="nt">--sku</span> Standard_LRS <span class="nt">--location</span> eastus
</code></pre>
</div>
<p>First, create a blob storage account in Azure. General Purpose v2 is fine for what we're building. I use Locally-redundant storage (LRS) for my account, but pick what's best based on your requirements.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr1js3tfm4uqt0kiprtje.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fr1js3tfm4uqt0kiprtje.jpg" alt="azure storage creation pane 1"></a>Storage account names are pretty particular </p>
<p>Once it's created (it may take a moment or two), we'll go to the IAM blade of your storage account. Here we need to add a role assignment of Storage Blob Data Reader to the user you're going to sign in with. This could be yourself or a test account. Start by clicking 'Add Role Assignment,' which should open a side pane. Here we'll choose 'Storage Blob Data Reader' and the user to whom you're allowing access. Make sure to click Save at the bottom.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjwifo8x5mjzqtei7514.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjwifo8x5mjzqtei7514.jpg" alt="add role assignment pane for blob storage"></a>Choose Storage Blob Data Reader </p>
<p>Now let's add some test data. We used some images, but you can use whatever files you want. First, under Containers in the side menu, add a new container, making sure to leave it as Private. Public will open that container to the internet with no authentication, so be careful here! </p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdvtm4z4xnmlqdz1zpkpz.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fdvtm4z4xnmlqdz1zpkpz.jpg" alt="create container - name, private"></a></p>
<p>Once you've created your container, click it and you can upload files directly from the web interface. Upload a few files - it doesn't really matter what they are; we used pictures, but use whatever's handy.</p>
<p>Great! Now we're finished with our storage account. You can download <a href="proxy.php?url=https://aka.ms/storageexplorer" rel="noopener noreferrer">Storage Explorer</a> for a desktop app to view/upload/download to and from your storage accounts. </p>
<p>On to Azure AD!</p>
<h2>
Azure AD setup
</h2>
<p>In Azure AD, we need to register an application. This is essentially telling Azure AD "hey, here's an app, at a specific set of URLs, that needs permissions to do things - either sign-in users, and/or access resources protected by Azure AD."</p>
<p>CLI to register a new app:</p>
<div class="highlight js-code-highlight">
<pre class="highlight shell"><code>
az ad app create <span class="nt">--reply-urls</span> <span class="s2">"http://localhost:3000/"</span> <span class="se">\</span>
<span class="nt">--oauth2-allow-implicit-flow</span> <span class="s2">"true"</span> <span class="se">\</span>
<span class="nt">--display-name</span> msaljs-to-blobs <span class="se">\</span>
<span class="nt">--required-resource-access</span> <span class="s2">"[{</span><span class="se">\"</span><span class="s2">resourceAppId</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">00000003-0000-0000-c000-000000000000</span><span class="se">\"</span><span class="s2">,</span><span class="se">\"</span><span class="s2">resourceAccess</span><span class="se">\"</span><span class="s2">: [{</span><span class="se">\"</span><span class="s2">id</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">e1fe6dd8-ba31-4d61-89e7-88639da4683d</span><span class="se">\"</span><span class="s2">,</span><span class="se">\"</span><span class="s2">type</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">Scope</span><span class="se">\"</span><span class="s2">}]},{</span><span class="se">\"</span><span class="s2">resourceAppId</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">e406a681-f3d4-42a8-90b6-c2b029497af1</span><span class="se">\"</span><span class="s2">,</span><span class="se">\"</span><span class="s2">resourceAccess</span><span class="se">\"</span><span class="s2">: [{</span><span class="se">\"</span><span class="s2">id</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">03e0da56-190b-40ad-a80c-ea378c433f7f</span><span class="se">\"</span><span class="s2">,</span><span class="se">\"</span><span class="s2">type</span><span class="se">\"</span><span class="s2">: </span><span class="se">\"</span><span class="s2">Scope</span><span class="se">\"</span><span class="s2">}]}]"</span>
</code></pre>
</div>
<p>To register a new app in the portal, head over to the Azure Active Directory blade; alternatively, go to the <a href="proxy.php?url=https://aad.portal.azure.com" rel="noopener noreferrer">AAD portal</a> - then App Registrations.</p>
<p>We're going to register a new app - give it a name, choose an audience and a platform. For us, we only want users in our directory to log in, so we'll stick with single tenant. More on multitenancy in a different post :). Then we need our platform - ours is a client app, so we're going to use that for now. </p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fagaa0kafi1pk4evkazve.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fagaa0kafi1pk4evkazve.jpg" alt="azure ad app registration page"></a>Yours should look something like this </p>
<p>Now we'll have our app registered! Almost done. We need to go grab a couple of extra pieces of info. Once the app is registered, from the overview blade, grab the Application (Client) Id and tenant ID and stash them off somewhere, like notepad or sticky notes.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fazg8an3ewdded7ynbp0f.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fazg8an3ewdded7ynbp0f.jpg" alt="app-registration-overview"></a></p>
<p>If you used the CLI, the appId will be in the returned data from the <code>az ad app create</code> command:</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fugyz2pagwqqpe13r1sh5.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fugyz2pagwqqpe13r1sh5.jpg" alt="az ad app create output"></a></p>
<p>We need to give our app permission to the storage service. We could do this in code when we need it, but we'll do it now since we're already here. Under the API Permissions menu, add a new permission, then choose Azure Storage. There will only be one delegated permission, <code>user_impersonation</code>. Add it, and make sure to click Save at the bottom.</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpdqdcnhias8yjoa7ib1.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnpdqdcnhias8yjoa7ib1.jpg" alt="choose an api"></a>Choose Azure Storage </p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4qpmfibfrbkdte19a2no.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4qpmfibfrbkdte19a2no.jpg" alt="storage scope selection"></a>There's only one delegated permission, user_impersonation </p>
<p>If you're using the CLI, you're already done - we added those permissions in the <code>requiredResourceAccess</code> parameter of our command.</p>
<p>CLI or portal, by the end, under the 'API permissions' blade you should see something like this:</p>
<p><a href="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3skny3oc1ozz8all1t15.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3skny3oc1ozz8all1t15.jpg" alt="configured permissions"></a></p>
<h2>
Now we can write some code!
</h2>
<p>We've made it! We're ready to build our app. Let's start with creating a new React app. I'm using <code>create-react-app</code> because I'm not a React pro - use what you're comfortable with.</p>
<div class="highlight js-code-highlight">
<pre class="highlight shell"><code>
npx create-react-app msaljs-to-blobs <span class="nt">--template</span> typescript
<span class="nb">cd </span>msaljs-to-blobs
</code></pre>
</div>
<p>Now that we've got our React app, let's add a few dependencies. We're using the Azure.Identity library for this, as it's what the storage library uses. </p>
<p>We can add these two to our <code>dependencies</code> in package.json and do an <code>npm i</code> to install.</p>
<div class="highlight js-code-highlight">
<pre class="highlight json"><code><span class="w">
</span><span class="s2">"dependencies: {
"</span><span class="err">@azure/identity</span><span class="s2">": "</span><span class="mf">1.0</span><span class="err">.</span><span class="mi">3</span><span class="s2">",
"</span><span class="err">@azure/storage-blob</span><span class="s2">": "</span><span class="err">^</span><span class="mf">12.2</span><span class="err">.</span><span class="mi">0</span><span class="err">-preview.</span><span class="mi">1</span><span class="s2">"
}
</span></code></pre>
</div>
<p>Next we're going to create a new component. I've got a new one called blobView.tsx:</p>
<div class="highlight js-code-highlight">
<pre class="highlight tsx"><code>
<span class="k">import</span> <span class="nx">React</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">react</span><span class="dl">'</span><span class="p">;</span>
<span class="c1">// we'll need InteractiveBrowserCredential here to force a user to sign-in through the browser</span>
<span class="k">import</span> <span class="p">{</span> <span class="nx">InteractiveBrowserCredential</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">"</span><span class="s2">@azure/identity</span><span class="dl">"</span><span class="p">;</span>
<span class="c1">// we're using these objects from the storage sdk - there are others for different needs</span>
<span class="k">import</span> <span class="p">{</span> <span class="nx">BlobServiceClient</span><span class="p">,</span> <span class="nx">BlobItem</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">"</span><span class="s2">@azure/storage-blob</span><span class="dl">"</span><span class="p">;</span>
<span class="kr">interface</span> <span class="nx">Props</span> <span class="p">{}</span>
<span class="kr">interface</span> <span class="nx">State</span> <span class="p">{</span>
<span class="c1">// a place to store our blob item metadata after we query them from the service</span>
<span class="nl">blobsWeFound</span><span class="p">:</span> <span class="nx">BlobItem</span><span class="p">[];</span>
<span class="nl">containerUrl</span><span class="p">:</span> <span class="kr">string</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">export</span> <span class="kd">class</span> <span class="nc">BlobView</span> <span class="kd">extends</span> <span class="nc">React</span><span class="p">.</span><span class="nx">Component</span><span class="o"><</span><span class="nx">Props</span><span class="p">,</span> <span class="nx">State</span><span class="o">></span> <span class="p">{</span>
<span class="na">state</span><span class="p">:</span> <span class="nx">State</span><span class="p">;</span>
<span class="nf">constructor</span><span class="p">(</span><span class="na">props</span><span class="p">:</span> <span class="nx">Props</span><span class="p">,</span> <span class="na">state</span><span class="p">:</span> <span class="nx">State</span><span class="p">)</span> <span class="p">{</span>
<span class="c1">//super(state);</span>
<span class="k">super</span><span class="p">(</span><span class="nx">props</span><span class="p">,</span> <span class="nx">state</span><span class="p">);</span>
<span class="k">this</span><span class="p">.</span><span class="nx">state</span> <span class="o">=</span> <span class="p">{</span> <span class="na">blobsWeFound</span><span class="p">:</span> <span class="p">[],</span> <span class="na">containerUrl</span><span class="p">:</span> <span class="dl">""</span> <span class="p">}</span>
<span class="p">}</span>
<span class="c1">// here's our azure identity config</span>
<span class="k">async</span> <span class="nf">componentDidMount</span><span class="p">()</span> <span class="p">{</span>
<span class="kd">const</span> <span class="nx">signInOptions</span> <span class="o">=</span> <span class="p">{</span>
<span class="c1">// the client id is the application id, from your earlier app registration</span>
<span class="na">clientId</span><span class="p">:</span> <span class="dl">"</span><span class="s2">01dd2ae0-4a39-43a6-b3e4-742d2bd41822</span><span class="dl">"</span><span class="p">,</span>
<span class="c1">// this is your tenant id - the id of your azure ad tenant. available from your app registration overview</span>
<span class="na">tenantId</span><span class="p">:</span> <span class="dl">"</span><span class="s2">98a34a88-7940-40e8-af71-913452037f31</span><span class="dl">"</span>
<span class="p">}</span>
<span class="kd">const</span> <span class="nx">blobStorageClient</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">BlobServiceClient</span><span class="p">(</span>
<span class="c1">// this is the blob endpoint of your storage acccount. Available from the portal </span>
<span class="c1">// they follow this format: <accountname>.blob.core.windows.net for Azure global</span>
<span class="c1">// the endpoints may be slightly different from national clouds like US Gov or Azure China</span>
<span class="dl">"</span><span class="s2">https://<your storage account name>.blob.core.windows.net/</span><span class="dl">"</span><span class="p">,</span>
<span class="k">new</span> <span class="nc">InteractiveBrowserCredential</span><span class="p">(</span><span class="nx">signInOptions</span><span class="p">)</span>
<span class="p">)</span>
<span class="c1">// this uses our container we created earlier - I named mine "private"</span>
<span class="kd">var</span> <span class="nx">containerClient</span> <span class="o">=</span> <span class="nx">blobStorageClient</span><span class="p">.</span><span class="nf">getContainerClient</span><span class="p">(</span><span class="dl">"</span><span class="s2">private</span><span class="dl">"</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">localBlobList</span> <span class="o">=</span> <span class="p">[];</span>
<span class="c1">// now let's query our container for some blobs!</span>
<span class="k">for</span> <span class="k">await </span><span class="p">(</span><span class="kd">const</span> <span class="nx">blob</span> <span class="k">of</span> <span class="nx">containerClient</span><span class="p">.</span><span class="nf">listBlobsFlat</span><span class="p">())</span> <span class="p">{</span>
<span class="c1">// and plunk them in a local array...</span>
<span class="nx">localBlobList</span><span class="p">.</span><span class="nf">push</span><span class="p">(</span><span class="nx">blob</span><span class="p">);</span>
<span class="p">}</span>
<span class="c1">// ...that we push into our state</span>
<span class="k">this</span><span class="p">.</span><span class="nf">setState</span><span class="p">({</span> <span class="na">blobsWeFound</span><span class="p">:</span> <span class="nx">localBlobList</span><span class="p">,</span> <span class="na">containerUrl</span><span class="p">:</span> <span class="nx">containerClient</span><span class="p">.</span><span class="nx">url</span> <span class="p">});</span>
<span class="p">}</span>
<span class="nf">render</span><span class="p">()</span> <span class="p">{</span>
<span class="k">return </span><span class="p">(</span>
<span class="p"><</span><span class="nt">div</span><span class="p">></span>
<span class="p"><</span><span class="nt">table</span><span class="p">></span>
<span class="p"><</span><span class="nt">thead</span><span class="p">></span>
<span class="p"><</span><span class="nt">tr</span><span class="p">></span>
<span class="p"><</span><span class="nt">th</span><span class="p">></span>blob name<span class="p"></</span><span class="nt">th</span><span class="p">></span>
<span class="p"><</span><span class="nt">th</span><span class="p">></span>blob size<span class="p"></</span><span class="nt">th</span><span class="p">></span>
<span class="p"><</span><span class="nt">th</span><span class="p">></span>download url<span class="p"></</span><span class="nt">th</span><span class="p">></span>
<span class="p"></</span><span class="nt">tr</span><span class="p">></span>
<span class="p"></</span><span class="nt">thead</span><span class="p">></span>
<span class="p"><</span><span class="nt">tbody</span><span class="p">></span><span class="si">{</span>
<span class="k">this</span><span class="p">.</span><span class="nx">state</span><span class="p">.</span><span class="nx">blobsWeFound</span><span class="p">.</span><span class="nf">map</span><span class="p">((</span><span class="nx">x</span><span class="p">,</span> <span class="nx">i</span><span class="p">)</span> <span class="o">=></span> <span class="p">{</span>
<span class="k">return</span> <span class="p"><</span><span class="nt">tr</span> <span class="na">key</span><span class="p">=</span><span class="si">{</span><span class="nx">i</span><span class="si">}</span><span class="p">></span>
<span class="p"><</span><span class="nt">td</span><span class="p">></span><span class="si">{</span><span class="nx">x</span><span class="p">.</span><span class="nx">name</span><span class="si">}</span><span class="p"></</span><span class="nt">td</span><span class="p">></span>
<span class="p"><</span><span class="nt">td</span><span class="p">></span><span class="si">{</span><span class="nx">x</span><span class="p">.</span><span class="nx">properties</span><span class="p">.</span><span class="nx">contentLength</span><span class="si">}</span><span class="p"></</span><span class="nt">td</span><span class="p">></span>
<span class="p"><</span><span class="nt">td</span><span class="p">></span>
<span class="p"><</span><span class="nt">img</span> <span class="na">src</span><span class="p">=</span><span class="si">{</span><span class="k">this</span><span class="p">.</span><span class="nx">state</span><span class="p">.</span><span class="nx">containerUrl</span> <span class="o">+</span> <span class="nx">x</span><span class="p">.</span><span class="nx">name</span><span class="si">}</span> <span class="p">/></span>
<span class="p"></</span><span class="nt">td</span><span class="p">></span>
<span class="p"></</span><span class="nt">tr</span><span class="p">></span>
<span class="p">})</span>
<span class="si">}</span>
<span class="p"></</span><span class="nt">tbody</span><span class="p">></span>
<span class="p"></</span><span class="nt">table</span><span class="p">></span>
<span class="p"></</span><span class="nt">div</span><span class="p">></span>
<span class="p">)</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>And that's it! Our <code>App.tsx</code> just includes a reference to this component. The Azure Identity libraries handle logging you in, asking for consent and putting tokens in the correct headers, absolving the developer from worrying about token storage.</p>
<p>Run the app and you should see the blobs listed in your storage account.</p>
<h2>
Stay connected!
</h2>
<p>We stream live twice a week at <a href="proxy.php?url=https://twitch.tv/425show" rel="noopener noreferrer">twitch.tv/425Show</a>! Join us:</p>
<ul>
<li>11a - 1p eastern US time Tuesdays</li>
<li>11a - 12n eastern US time Fridays for Community Hour</li>
</ul>
<p>Be sure to send your questions to us here, on twitter or email: <a href="proxy.php?url=mailto:[email protected]">[email protected]</a>!</p>
<p>Until next time,<br>
JPD</p>
<p>Tags: azure, react, tutorial, javascript</p>
<h1>Creating a Teams presence publisher with Azure Functions, local and cloud</h1>
<p>John Patrick Dandison, Tue, 24 Mar 2020 15:04:11 +0000<br>
https://dev.to/jpda/creating-a-teams-presence-publisher-with-azure-functions-local-and-cloud-2i9g</p>
<p><em>Want to go straight to the code? <a href="proxy.php?url=https://github.com/jpda/i-come-bearing-presence">Here it is</a></em></p>
<h2>
Teams Presence
</h2>
<p>Presence info has been around a long time - we had it in Skype for Business and its predecessors - Lync, OCS, LCS, etc. There were even devices you could buy with lights indicating presence - super useful, especially in more ‘open’ offices to achieve some focus.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--xdZA6uSl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-15-36-02.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--xdZA6uSl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-15-36-02.png" alt="embrava blynclight" width="288" height="260"></a></p>
<p>The Embrava lights were popular, I had one on my desk, in fact - but they don’t seem to work with Teams presence. Combine that with more recent news:</p>
<ul>
<li><a href="proxy.php?url=https://developer.microsoft.com/en-us/graph/blogs/microsoft-graph-presence-apis-are-now-available-in-public-preview/">Presence is now in Microsoft Graph</a></li>
<li><a href="proxy.php?url=https://news.microsoft.com/covid-19-response/">Working from home is a new reality for many people</a></li>
<li><a href="proxy.php?url=https://ed.sc.gov/districts-schools/schools/district-and-school-closures/">Working from home with an entire family</a></li>
</ul>
<p>And since I have little ones at home, it got me thinking about ways to let them know if I’m on the phone or not. I wanted something multitenant that could publish presence for anyone, not just me. In that spirit - any solution worth a solution is worth an over-engineered solution, right? <em>Right?!</em> Click the video to see it in action.</p>
<p><a href="proxy.php?url=https://www.youtube.com/watch?v=ujDyD63KdbA"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--qQnhL2XU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://img.youtube.com/vi/ujDyD63KdbA/0.jpg" alt="video" width="480" height="360"></a></p>
<p>At home I’ve got a few bulbs, in my office but also upstairs so my family can see if I’m on the phone or not.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--kgxYJCDu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-06-04.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--kgxYJCDu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-06-04.png" alt="teams presence bulbs"></a></p>
<p>Same for the office - rather than a small light, I felt the size of this lamp really reinforced the message.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--F8QmeC6G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-06-45.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--F8QmeC6G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-06-45.png" alt="office presence" width="800" height="1067"></a></p>
<h2>
Let’s build!
</h2>
<p>Here’s what we’re going to build:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--8ik7KnMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-40-00.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--8ik7KnMr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-16-40-00.png" alt="runtime sketch 1" width="800" height="450"></a></p>
<p>A presence-poller running in Azure Functions, which…</p>
<ul>
<li>logs in a user and stores a <code>refresh_token</code>,</li>
<li>polls the Graph for Presence updates,</li>
<li>stores the update (if any) in Azure Table Storage, and</li>
<li>notifies subscribers over a Service Bus topic.</li>
</ul>
<p>Plus, we have a local Azure Function, which…</p>
<ul>
<li>runs in a container on a raspberry pi,</li>
<li>creates a subscription to the Service Bus topic, and</li>
<li>interacts with the local Philips Hue Hub over HTTP</li>
</ul>
<p>There is no webhook or push support yet for Presence updates, so until then, we need to poll for it ourselves. Rather than having multiple devices poll for the same data, I thought it prudent to poll from one component, then push to as many subscribers as are interested. The local Function acts as a shim for interacting with whatever local devices are interested in presence updates. In my case, two Philips Hue Hubs - one in the office, one at home. This also helps with firewall silliness - since the poller runs cloud side but publishes changes to a topic, our local function only needs outbound internet access - no inbound access required.</p>
<h2>
Presence updater job
</h2>
<p>The <a href="proxy.php?url=https://github.com/jpda/i-come-bearing-presence/blob/2d633408772be906a9c1423eef31e8ec6a4447c2/ComeBearingPresence.Func/PresencePublisher.cs#L103"><code>presence-refresh</code></a> function calls <code>CheckAndUpdatePresence</code>, which does the bulk of the heavy lifting. This timer job is responsible for pinging the Graph to fetch a user’s current presence, checking it against the last known presence and, if different, publishing it to subscribers. I’m using the user’s ID as the service bus topic name - this way we have enough information at runtime to know which topic to use for sending updates.</p>
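<p>The core of that loop can be sketched like so - a TypeScript sketch with illustrative names (the actual implementation is C# in <code>PresencePublisher.cs</code>):</p>

```typescript
// Hypothetical sketch of the poll-compare-publish decision; the field names
// mirror the Graph presence resource (availability/activity).
type Presence = { availability: string; activity: string };

// Publish only when the fetched presence differs from the last known one.
function presenceChanged(last: Presence | undefined, current: Presence): boolean {
  return (
    last === undefined ||
    last.availability !== current.availability ||
    last.activity !== current.activity
  );
}

// The user's ID doubles as the Service Bus topic name.
function topicNameFor(userId: string): string {
  return userId.toLowerCase();
}

const changed = presenceChanged(
  { availability: "Available", activity: "Available" },
  { availability: "Busy", activity: "InACall" }
);
console.log(changed); // true -> publish to the user's topic
```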
<p>First, we need to know which users we want presence data for. This is driven by a store of accounts in Table storage. The key here is <em>how</em> the accounts get into the list in the first place - and how we get access tokens for querying the graph.</p>
<h2>
Authenticating users and asking for consent
</h2>
<p>Of course, being the Graph, everything here is authenticated via Azure AD. We need the <code>Presence.Read</code> permission, which (as of March 2020) is an admin-consentable permission <em>only.</em> This means that only an administrator of an Azure AD tenant can consent to apps using this permission. There are some reasons we can infer from that - a malicious app that knows your presence could risk far more information exposure than your organization (or you) would be comfortable with. Once we have an app with that consent, we still need to capture individual user consent (you can, of course, consent tenant-wide as an administrator) and record that this user would like to get their presence info published to them.</p>
<p>To configure this, first we need to register an app. The app itself needs <code>Presence.Read</code> but little else. We’ll also need certain reply URLs registered. You can read more about this process <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app">here</a>. You’ll get a client ID (application ID) and secret (a random string, or a certificate). Save them off, we’ll need them in a little bit. Note I’m using a certificate here for authentication, since this <code>Presence.Read</code> is a sensitive scope. We also need a way to securely share that secret (be it a string or a certificate) with our Function.</p>
<p>We’ll use KeyVault and Managed Identity for accessing the secret. This way, our Function will have an identity usable only by itself - we’ll assign this identity permissions in KeyVault to pull the secrets, so we don’t keep any secrets in code. Alternatively, you can assign the Presence.Read permission directly to your Managed Identity, removing the need for KeyVault entirely. Read on <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/key-vault/managed-identity">here</a> for more info on using KeyVault with Managed Identities.</p>
<p>Alternatively, if using certificate authentication to Azure AD, you can store the certificate in App Service directly, then reference it using normal Windows certificate store semantics (e.g., CurrentUser/My, etc). In fact, I had to switch to this for testing, as I was having issues getting connected to KeyVault reliably.</p>
<p>Next, let’s get into the <code>auth-start</code> and <code>auth-end</code> functions, which actually authenticate our users.</p>
<h3>
<code>auth-start</code>
</h3>
<p>This function is responsible for two things: authenticating a user to capture <code>access_</code> and <code>refresh_token</code>s, and storing the user in the PresenceRequests table (e.g., please start polling for presence updates). We need the <code>Presence.Read</code> and <code>offline_access</code> scopes so we get a refresh token. <code>auth-start</code> doesn’t do a whole lot except redirect the user over to Azure AD to authenticate. We’re going to use the <code>authorization_code</code> flow here to keep any tokens out of the user’s browser. After the user signs in, we’ll ask Azure AD to do an HTTP POST back to us with the authorization code in tow. While we could generate that URL ourselves, we can hand it off to MSAL here using <code>ConfidentialClientApplication.GetAuthorizationRequestUrl</code>, making sure to also include <code>response_mode=form_post</code> to ensure we’re keeping the code out of the URL.</p>
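<p>For illustration, here’s roughly the request MSAL builds on our behalf - a TypeScript sketch with placeholder tenant/client values, not the function’s actual code:</p>

```typescript
// Sketch of the authorize URL that auth-start redirects the user to. In the
// real function, ConfidentialClientApplication.GetAuthorizationRequestUrl
// produces this; tenant, client id and redirect URI here are placeholders.
function buildAuthorizeUrl(tenant: string, clientId: string, redirectUri: string): string {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: "code",       // authorization_code flow
    response_mode: "form_post",  // POST the code back instead of putting it in the URL
    redirect_uri: redirectUri,
    scope: "https://graph.microsoft.com/Presence.Read offline_access", // offline_access => refresh token
  });
  return `https://login.microsoftonline.com/${tenant}/oauth2/v2.0/authorize?${params}`;
}

console.log(buildAuthorizeUrl("common", "<client-id>", "https://localhost/api/auth-end"));
```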
<h3>
<code>auth-end</code>
</h3>
<p>This function handles the return trip from Azure AD. Notably, it receives the <code>authorization_code</code> Azure AD generates after a successful sign-in and authorization, and sends that <em>back</em> to Azure AD to receive an access_token & refresh_token for the requested scopes. We could, again, handle this ourselves, but better to let a library do the heavy lifting - MSAL’s <code>AcquireTokenByAuthorizationCode</code> handles the interaction with Azure AD, then caches the tokens for us.</p>
<p>Next we store the fact that a user authorized us in tables - this is what the timer job uses to determine which presences to request.</p>
<h2>
Token caching & MSAL
</h2>
<p>Rather than reimplement an entire token cache, since we’re only interested in changing the <em>persistence</em> of the cache and not the mechanisms of how the cache is serialized, MSAL provides us some delegates we can set - namely, <code>BeforeAccess</code> and <code>AfterAccess</code>. Set these delegates to your own methods to handle the last-step persistence (and rehydration) of your preferred storage medium. I’m using table storage here again, so my Before and After access delegates are only concerned with writing and reading the bytes to and from table storage.</p>
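<p>Reduced to pseudocode (TypeScript here, with table storage stubbed by an in-memory map - this shows the shape of the pattern, not the real MSAL API):</p>

```typescript
// The BeforeAccess/AfterAccess pattern: rehydrate the serialized cache before
// every read, persist it back after any access that changed it. Only the
// persistence layer is ours; (de)serialization of the cache stays MSAL's job.
const tableStorage = new Map<string, string>(); // stand-in for Azure Table Storage

class DelegatedTokenCache {
  private bytes = "";
  constructor(private readonly cacheKey: string) {}

  // BeforeAccess: pull the latest serialized cache from the persistence layer
  private beforeAccess(): void {
    this.bytes = tableStorage.get(this.cacheKey) ?? "";
  }

  // AfterAccess: write it back, but only if the cache actually changed
  private afterAccess(changed: boolean): void {
    if (changed) tableStorage.set(this.cacheKey, this.bytes);
  }

  write(serialized: string): void {
    this.beforeAccess();
    this.bytes = serialized;
    this.afterAccess(true);
  }

  read(): string {
    this.beforeAccess();
    return this.bytes;
  }
}

const cache = new DelegatedTokenCache("user-object-id");
cache.write("{serialized-msal-cache}");
console.log(cache.read());
```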
<p>Now - <a href="proxy.php?url=https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-confidential-client-application">if you read through the guidance from the MSAL team in the wiki & in the docs</a> - you’ll notice this:</p>
<blockquote>
<p>A very important thing to remember is that for Web Apps and Web APIs, there should be one token cache per user (per account). You need to serialize the token cache for each account.</p>
</blockquote>
<p>In web apps, this is fairly straightforward to accomplish - a user is signed in, so we usually have some bit of identifying information about them to use as a cache key. This way we can query <em>something</em> like Redis, Cosmos, SQL, etc to fetch our MSAL cache. In our case, however, we don’t have an interactive user except for the first time. In fact, we want to keep a very Microsoft-Flow-esque user experience: sign in, store off tokens, do background work as necessary. This means that when we run our timer job without any sort of user context available, it gets dicey trying to figure out which token cache to use.</p>
<p><code>TokenCacheNotificationArgs</code> (what your BeforeAccess and AfterAccess delegates will receive) don’t have enough user data for us to determine which cache to use, because we don’t have an <code>Account</code> object yet - at timer job runtime, we literally have an empty MSAL object - configured, but no accounts or data yet. That MSAL’s main entry point objects are called ‘Applications’ (<code>PublicClientApplication</code> and <code>ConfidentialClientApplication</code>) is a bit misleading. In reality, the uniqueness per application object is really the token cache itself. What I learned here was that, in fact, we don’t want to use a singleton ConfidentialClientApplication, instead we want a <code>ConfidentialClientApplication</code> <em>per unique token cache</em> which, per the guidance, is per-user. Beyond that, a singleton <code>ConfidentialClientApplication</code> would cause problems with cache serialization - as all accounts within the object at the time are serialized instead of a notion of a ‘swappable’ cache in the same object. Needless to say, I spent a ton of time trying to figure out the best way to go forward.</p>
<p>What I ended up with is an MSAL client factory. Not a big fan of it, but it lets me request an MSAL ConfidentialClientApplication configured with the right caches per user, sent in at runtime. Find it <a href="proxy.php?url=https://github.com/jpda/i-come-bearing-presence/tree/master/ComeBearingPresence.Func/TokenCache">here</a>.</p>
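<p>The factory boils down to memoizing one client per user. A TypeScript sketch of the idea (names are illustrative, not the repo’s):</p>

```typescript
// One confidential client per unique token cache - i.e., per user. A stand-in
// class is used here; in the real code this would be a
// ConfidentialClientApplication wired to that user's own cache delegates.
class PerUserClient {
  constructor(public readonly cacheKey: string) {}
}

const clients = new Map<string, PerUserClient>();

function clientFor(userId: string): PerUserClient {
  let client = clients.get(userId);
  if (!client) {
    client = new PerUserClient(`msal-cache-${userId}`); // this user's own cache
    clients.set(userId, client);
  }
  return client;
}

console.log(clientFor("alice") === clientFor("alice")); // true: reused for the same user
console.log(clientFor("alice") === clientFor("bob"));   // false: separate cache per user
```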
<p>You may also notice a transfer between transient & actual keys. The reason for this is that during the <code>auth-end</code> callback, we have no context to know who the user is - so when we create our CCA to consume the code (<code>AcquireTokenByAuthorizationCode</code>), if we don’t set the cache delegates upfront, the tokens don’t get persisted. This sets the cache up front under a random identifier, which is then renamed in storage to the user’s actual identifier once we know who the user is.</p>
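<p>In sketch form (TypeScript with hypothetical names; table storage again stubbed with a map):</p>

```typescript
// Persist under a throwaway key while the user is still unknown, then "rename"
// the entry to the user's real identifier once auth-end learns who they are.
const cacheStore = new Map<string, string>(); // stand-in for table storage

function persistUnderTransientKey(serializedCache: string): string {
  const transientKey = "transient-" + Math.random().toString(36).slice(2);
  cacheStore.set(transientKey, serializedCache);
  return transientKey;
}

function promoteToUserKey(transientKey: string, userId: string): void {
  const cached = cacheStore.get(transientKey);
  if (cached === undefined) throw new Error("no cache under transient key");
  cacheStore.set(userId, cached);  // copy to the real key...
  cacheStore.delete(transientKey); // ...and drop the transient one
}

const transient = persistUnderTransientKey("{cache-bytes}");
promoteToUserKey(transient, "user-object-id");
console.log(cacheStore.has("user-object-id")); // true
```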
<p>Rube Goldberg would be proud.</p>
<p>Once we’ve plowed through this circus, we have a reliable way to get access_tokens for calling the graph. At this point, we’re publishing presence changes from the graph to a topic! Our cloud-side work is done.</p>
<h2>
Publishing to subscribers
</h2>
<p>The Service Bus topics are created per user ID, and each subscriber should have its own topic subscription (unless they are competing consumers doing the same work). I have two locations, Home & Office, so I have two subscriptions, one for each. If I had multiple <em>instances</em> of a subscriber working on a common goal (e.g., updating the lights at my home), those instances would share the same subscription.</p>
<h2>
Subscribing to updates and manipulating our Philips Hue Hub
</h2>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--_YVz5v-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-21-19-22.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--_YVz5v-x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/2020-03-24-21-19-22.png" alt="CNN: new pope smoke" width="800" height="451"></a></p>
<p>OK! Now we’ve gotten our access token and we’re polling the graph. Great! Now we need something to listen for those status changes. This could be anything you want - a light, a sign, or even a puff of smoke (like when they pick a new pope). I’m using Philips Hue bulbs, but my colleague <a href="proxy.php?url=https://twitter.com/mahoekst/status/1215179713888391168?s=20">Matthijs</a> built some wild stuff with his presence and a bunch of boards and hundreds of LEDs, plus a really cool MSAL Device Code flow.</p>
<p>Talking to the Hue Hub means creating the equivalent of an API key on the local Hub, then using that in subsequent requests. I looked at Philips’ online stuff, but you can only have one Hue hub per account? Seems like a strange limitation, but since I have one at home and at work, I figured that wasn’t going to work out. Instead, I’ll just talk to them locally.</p>
<p>Creating an api key to talk to the hub locally is a pretty quick one-time procedure. You can do it manually or automate it - but you’ll need to press the button on your Hub, then run your code within a window of time. See more about creating Hue keys <a href="proxy.php?url=https://developers.meethue.com/develop/get-started-2/">here</a>.</p>
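<p>The hub’s response to that one-time POST comes back as a JSON array of success/error objects. A small TypeScript sketch of the happy and unhappy paths (the HTTP request itself is shown only as a comment, since it needs a real hub on your network):</p>

```typescript
// Response shapes per the Hue bridge's local API: POST /api with a devicetype
// returns [{"success":{"username":"..."}}], or an error (e.g. "link button not
// pressed") if the Hub's button wasn't pressed within the window.
type HueCreateResponse =
  | { success: { username: string } }
  | { error: { type: number; description: string } };

function parseCreateKeyResponse(body: HueCreateResponse[]): string {
  const first = body[0];
  if ("success" in first) return first.success.username; // this is your api key
  throw new Error(`hub refused: ${first.error.description}`);
}

// e.g. fetch("http://<hub-ip>/api", { method: "POST",
//        body: JSON.stringify({ devicetype: "presence#pi" }) })
console.log(parseCreateKeyResponse([{ success: { username: "abc123" } }]));
```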
<p>Now on to our <a href="proxy.php?url=https://github.com/jpda/i-come-bearing-presence/tree/master/LocalHueHubSubscriber.Func">LocalHueHubSubscriber</a> function. This function runs locally, with a network path to your Hue Hub. There is a dockerfile to build targeting <code>dotnet2.0-arm32v7</code>, which runs on a raspberry pi. It pings our Hue Hub with new colors depending on status. I have a crude status-to-color map (find that <a href="proxy.php?url=https://github.com/jpda/i-come-bearing-presence/blob/2d633408772be906a9c1423eef31e8ec6a4447c2/LocalHueHubSubscriber.Func/Function1.cs#L25">here</a>) - it uses a Service Bus trigger and, when it fires, makes an HTTP call to the hub with the desired color.</p>
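<p>For flavor, such a status-to-color map might look like this - a TypeScript sketch with guessed hue values, not the actual mapping in <code>Function1.cs</code>:</p>

```typescript
// Hue's local API takes hue on a 0-65535 color wheel and sat/bri up to 254.
// The mapping below is illustrative; pick whatever colors read well in your house.
type HueCommand = { on: boolean; hue: number; sat: number; bri: number };

const presenceColors: Record<string, HueCommand> = {
  Available:    { on: true,  hue: 25500, sat: 254, bri: 200 }, // green
  Busy:         { on: true,  hue: 0,     sat: 254, bri: 200 }, // red
  DoNotDisturb: { on: true,  hue: 0,     sat: 254, bri: 254 }, // brighter red
  Away:         { on: true,  hue: 12750, sat: 254, bri: 200 }, // yellow
  Offline:      { on: false, hue: 0,     sat: 0,   bri: 0 },
};

function commandFor(availability: string): HueCommand {
  return presenceColors[availability] ?? presenceColors["Offline"];
}

// The Service Bus trigger would then PUT this body to the group action
// endpoint, e.g.: PUT http://<hub-ip>/api/<api-key>/groups/<group-id>/action
console.log(JSON.stringify(commandFor("Busy")));
```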
<p>On your raspberry pi, all you need is docker. Use the <code>.env.local</code> file to store configuration and push it into the container at runtime. You’ll need your SB topic subscription’s connection string, plus your local Hue Hub’s IP and path to your light group.</p>
<h2>
todo
</h2>
<ul>
<li>UI for adding user endpoints</li>
<li>ARM template for Azure resources</li>
<li>finish code for creating SB topics when new user onboards</li>
</ul>
<p>Find me at <a href="proxy.php?url=https://twitter.com/AzureAndChill">@AzureAndChill</a> with any questions or concerns!</p>
<p>Tags: azure, teams, presence, graph</p>
<h1>Using NSwag and SwaggerUI with Azure AD B2C-protected APIs</h1>
<p>John Patrick Dandison, Mon, 02 Dec 2019 23:45:11 +0000<br>
https://dev.to/jpda/using-nswag-and-swaggerui-with-azure-ad-b2c-protected-apis-18l7</p>
<p>Interested in hosting SwaggerUI/OpenAPI docs for your B2C-protected APIs?</p>
<h2>
NSwag
</h2>
<p>Swagger/OpenAPI is a way of defining your APIs through a common markup - from that markup, you and your customers/API consumers can autogenerate client code for your API. But what if you want to host a decent UI for developers to test your API? SwaggerUI is an interface that has been around for a while - it may look familiar, as lots of APIs use it for sharing docs. ‘Test-in-browser’ for APIs is super helpful for devs who are using your API, but what do we do if that API is protected by a <code>Bearer</code> scheme using something like Azure AD B2C? OAuth2 flows aren’t much fun to code by hand and can be a big stumbling block for your API consumers to get started using your API. Anything we can do to lower the barrier to entry will help encourage adoption.</p>
<p>In dotnet, we have the <a href="proxy.php?url=https://github.com/RicoSuter/NSwag">NSwag</a> package that has OpenAPI/Swagger doc auto-generation from your API controllers, plus some client-generation code, and of course, SwaggerUI, all shipped as middleware. More docs on wiring this up for your app can be found <a href="proxy.php?url=https://docs.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-nswag?view=aspnetcore-3.0&tabs=visual-studio">here</a>. Let’s dig into how we can use <code>UseSwaggerUi3</code> with <code>OAuth2Client</code> to give your consumers a first-class API doc experience using NSwag and B2C.</p>
<h2>
B2C setup
</h2>
<p>First we need a B2C-protected API registration, some scopes exposed by that API and a client app (SwaggerUI) that can request access to those APIs. We’ll also need at least one sign-in policy. You can check out the docs <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory-b2c/tutorial-create-tenant">here</a> for getting your B2C tenant created and configured with an Identity Provider and user flows.</p>
<h3>
API app
</h3>
<p>If you already have a B2C-protected API, you can skip this part. First we need to create a new app registration, indicating it’s a webapp/api. Make sure you also give it an App ID URI - this will be important for defining scopes.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FzP5FWxC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/00-create-b2c-api.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--FzP5FWxC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/00-create-b2c-api.png" alt="create-b2c-api" title="create a new b2c app, indicating web app/api" width="666" height="695"></a></p>
<p>Next let’s define some scopes. Scopes are permissions that <em>other</em> applications can request from your application. Applications requesting these scopes are only allowed to perform operations within that scope, regardless of what other permissions the user may have. Scopes give us more control over how external apps integrate with our services and the kinds of data/actions they can execute. For example, if you ran a photo sharing service, you may have scopes like ‘PhotoLibrary.Upload’ or ‘PhotoLibrary.Read’ - this way, my super awesome camera app could <em>upload</em> photos to the user’s photo library, but not read or modify photos that are already in the library. Well-scoped APIs are important, especially today when so much of our digital lives is centralized around few providers. Do I really want to give a camera app in the store permission to upload photos to Google Photos but also get access to my Gmail and Calendar? Probably not.</p>
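<p>The photo-sharing example, as a check - a TypeScript sketch with made-up scope names, showing that the app’s granted scopes, not the user’s own permissions, bound what a request may do:</p>

```typescript
// An access token carries the scopes the client app was granted; the API
// enforces that the operation's required scope is among them.
function isAllowed(grantedScopes: string[], requiredScope: string): boolean {
  return grantedScopes.includes(requiredScope);
}

// A camera app granted only upload rights:
const cameraAppScopes = ["PhotoLibrary.Upload"];
console.log(isAllowed(cameraAppScopes, "PhotoLibrary.Upload")); // true: can add photos
console.log(isAllowed(cameraAppScopes, "PhotoLibrary.Read"));   // false: cannot read the library
```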
<p>In B2C, defining scopes is done via ‘Published Scopes’ under your API’s app registration. Add whichever scopes you deem appropriate.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--itinUIbd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/01-create-api-scopes.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--itinUIbd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/01-create-api-scopes.png" alt="create-b2c-api-scopes" title="create API scopes under published scopes" width="800" height="170"></a></p>
<p>If you didn’t define an App ID URI in the earlier step, you’ll need to do it before adding scopes - it’s under the Properties blade of your app registration.</p>
<p>We’ll also need to configure aspnetcore to use <code>Bearer</code> authentication for your APIs. Check <a href="proxy.php?url=https://github.com/Azure-Samples/active-directory-b2c-dotnetcore-webapi/blob/master/B2C-WebApi/Startup.cs">here</a> for a full sample. Without this, your APIs are unprotected and none of the work we do here will have any effect.</p>
<h3>
SwaggerUI client app
</h3>
<p>Next we need to create another app registration in B2C. This app registration represents your SwaggerUI app, which will effectively be a client of your API. Because your API has published scopes, we’ll also want users to be able to request those scopes in the tokens they request via SwaggerUI, so we’ll need to grant the SwaggerUI client permissions for the appropriate scopes.</p>
<p>Create a new app registration, again marking it as a webapp/api. You’ll also need to make sure to allow implicit flow and set a reply url. You can do this during app creation or within the Properties pane after creation. In this case, I’m using kestrel (not IIS Express), so my reply url is <code>https://localhost:5001/swagger/oauth2-redirect.html</code>.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--3nW38nGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/02-create-swagger-ui-client.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--3nW38nGi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/02-create-swagger-ui-client.png" alt="create-swagger-ui-client" title="create swagger ui client app reg" width="723" height="678"></a></p>
<p>Once you’ve created your app registration, we need to allow the SwaggerUI client app to request specific scopes from your API. Note that not all scopes may be appropriate here. For example, if your API exposes operations that don’t make sense for a user to request, or aren’t applicable to your user-facing API (e.g., some sort of batch or ETL process, or a service-to-service method), you may not want to allow a user to request that scope from the UI.</p>
<p>In B2C, go to API Access under the SwaggerUI app registration - Add a new one, find your API from earlier and enable the appropriate scopes.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--lWsKy_xP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/03-add-api-permissions.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--lWsKy_xP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/03-add-api-permissions.png" alt="add-api-permissions" title="add api permissions to your swaggerui app reg" width="800" height="230"></a></p>
<p>Save that app and our B2C configuration is complete.</p>
<h2>
dotnet setup
</h2>
<p>First let’s add some nuget packages:</p>
<p><code>NSwag.AspNetCore</code> and <code>Microsoft.AspNetCore.Authentication.JwtBearer</code>.</p>
<p>In our actual software, let’s head over to <code>Startup.cs</code> or wherever you have your aspnet startup class. Here, under <code>ConfigureServices</code> we want to add our OpenApi/Swagger doc configuration, in addition to the OAuth2 configuration.</p>
<p>In our OAuth2 configuration, we have a few values to keep in mind. Remember that these are the scopes that are published by your API <em>and</em> the SwaggerUI application registration was assigned access.</p>
<ul>
<li>
<code>b2c_tenant_name</code> is the name of your b2c tenant - in my case, <code>jpdab2c</code>
</li>
<li>Your list of appropriate scopes (in the format of your app id uri/scope name, e.g., <code>jpdab2c.onmicrosoft.com/your-api/read</code>)</li>
<li>Your sign-in flow’s policy ID (this usually starts <code>B2C_1</code>, the example below shows <code>b2c_1_susi_v2</code>, the name of my sign-in policy - this is the specific policy a user would use for sign-in).</li>
<li>Your <code>authorize</code> and <code>token</code> URLs <em>for the specific sign-in policy you want to use for SwaggerUI</em> - this may be different from the one users would use to get into your application, if you wanted different pieces of information or requirements for your developers’ flows vs active users. Note the URLs are policy-specific. You can get this from the metadata discovery document under the ‘Run Now’ view of your user flow/policy, or using this format: <code>https://<b2c_tenant_name>.b2clogin.com/<b2c_tenant_name>.onmicrosoft.com/v2.0/.well-known/openid-configuration?p=<B2C_1_user_flow_name></code>
</li>
</ul>
<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>public void ConfigureServices(IServiceCollection services)
{
// snip
services.AddAuthentication(opts => opts.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(opts =>
{
opts.Authority = $"https://login.microsoftonline.com/tfp/b2c_tenant_name.onmicrosoft.com/b2c_1_policy_name/v2.0/";
opts.Audience = "client/application ID of api";
});
// see docs for more config options for AddJwtBearer
// Add security definition and scopes to document
services.AddOpenApiDocument(document =>
{
document.AddSecurity("bearer", Enumerable.Empty<string>(), new OpenApiSecurityScheme
{
Type = OpenApiSecuritySchemeType.OAuth2,
Description = "B2C authentication",
Flow = OpenApiOAuth2Flow.Implicit,
Flows = new OpenApiOAuthFlows()
{
Implicit = new OpenApiOAuthFlow()
{
Scopes = new Dictionary<string, string>
{
{ "https://<b2c_tenant_name>.onmicrosoft.com/your-api/user_impersonation", "Access the api as the signed-in user" },
{ "https://<b2c_tenant_name>.onmicrosoft.com/your-api/read", "Read access to the API"},
{ "https://<b2c_tenant_name>.onmicrosoft.com/your-api/mystery_scope", "Let's find out together!"}
},
AuthorizationUrl = "https://<b2c_tenant_name>.b2clogin.com/<b2c_tenant_name>.onmicrosoft.com/oauth2/v2.0/authorize?p=b2c_1_susi_v2",
TokenUrl = "https://<b2c_tenant_name>.b2clogin.com/<b2c_tenant_name>.onmicrosoft.com/oauth2/v2.0/token?p=b2c_1_susi_v2"
},
}
});
document.OperationProcessors.Add(new AspNetCoreOperationSecurityScopeProcessor("bearer"));
});
//snip
services.AddControllers();
// ...
}
</code></pre>
</div>
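<p>The policy-specific B2C URLs in the configuration above follow a predictable pattern. As a quick sanity check, here's a small sketch (Python, with placeholder tenant and policy names) that assembles them:</p>

```python
# Assemble the policy-specific Azure AD B2C endpoints from the tenant and
# user-flow (policy) names. "contoso" and "b2c_1_susi_v2" are placeholders.
def b2c_endpoints(tenant: str, policy: str) -> dict:
    base = f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com"
    return {
        "metadata": f"{base}/v2.0/.well-known/openid-configuration?p={policy}",
        "authorize": f"{base}/oauth2/v2.0/authorize?p={policy}",
        "token": f"{base}/oauth2/v2.0/token?p={policy}",
    }

for name, url in b2c_endpoints("contoso", "b2c_1_susi_v2").items():
    print(f"{name}: {url}")
```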
<p>Next let’s tell aspnet to enable the UI. The code below includes the entire <code>Configure</code> method from aspnetcore3’s API template - I’ve included it for posterity, but yours will likely differ depending on your configuration.</p>
<p>Here we have a couple of values to note:</p>
<ul>
<li>
<code>ClientId</code> is the client ID/application ID of the SwaggerUI application’s registration. You can get this from the Properties pane of your client app’s registration</li>
<li>
<code>AppName</code> is a display name that shows in the SwaggerUI interface
</li>
</ul>
<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
// additions for OpenAPI and SwaggerUI
app.UseOpenApi();
app.UseSwaggerUi3(settings =>
{
settings.OAuth2Client = new OAuth2ClientSettings
{
ClientId = "bb893c2d-fca9-446e-92f7-c4a400491005",
AppName = "swagger-ui-client"
};
});
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
</code></pre>
</div>
<p>Now let’s try it out!</p>
<h2>
Let’s go
</h2>
<p>Run your app, then head to <code>https://localhost:5001/swagger</code>. Your APIs should have been automatically discovered and you’ll notice an ‘Authorize’ button in the top left corner. Click it and you should see a new modal dialog with some options for requesting a token. It’ll look something like this:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--7IL79kQb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/05-ui1.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--7IL79kQb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/05-ui1.jpg" alt="authorization-ui" title="authorization modal in swagger ui showing available scopes and endpoints" width="800" height="735"></a></p>
<p>Check the boxes for the scopes you’d like to request, then click Authorize. This should redirect you over to B2C where you can sign in. This policy in my B2C tenant has some UI customization applied:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--LhRxpHyN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/06-ui2.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--LhRxpHyN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/06-ui2.jpg" alt="b2c-signin" title="authorization modal in swagger ui showing available scopes and endpoints" width="800" height="867"></a></p>
<p>Once we sign in, we get redirected back to our SwaggerUI, token in tow:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--poMb-0Fo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/07-ui3.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--poMb-0Fo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/07-ui3.png" alt="swagger-ui-token-received" title="authorization modal in swagger ui showing authorized" width="800" height="516"></a></p>
<p>Now when we try out one of our API calls, SwaggerUI handles tacking on the bearer header for us:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--g1qjJhuK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/08-ui4.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--g1qjJhuK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/08-ui4.png" alt="swagger-ui-token-received" title="authorized call" width="800" height="323"></a></p>
<p>And finally, if we go take a look at the token we received at <a href="proxy.php?url=https://jwt.ms">jwt.ms</a>, you’ll note the scopes we requested in the SwaggerUI modal are also available:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--q7Sf2URN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/09-token.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--q7Sf2URN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/09-token.png" alt="swagger-ui-token-received" title="token in jwt.ms" width="800" height="902"></a></p>
<p>:bowtie:</p>
<p>Find me at <a href="proxy.php?url=https://twitter.com/AzureAndChill">@AzureAndChill</a> with any questions or concerns!</p>
azure, b2c, nswag, api

Automating non-RBAC AKS and kubectl with Azure AD service principals - John Patrick Dandison - Tue, 19 Nov 2019 20:08:17 +0000
https://dev.to/jpda/automating-non-rbac-aks-and-kubectl-with-azure-ad-service-principals-4km6
https://dev.to/jpda/automating-non-rbac-aks-and-kubectl-with-azure-ad-service-principals-4km6<p>This came across my desk this morning, and it happens frequently enough that I figured it was worth writing about (write once, refer frequently!). Cloud services are made for automation from the ground up - automation is how you reach maximum efficiency of both the resources you create and the people creating & managing them. Of course, with automation, we still need security around who/what can log in and manipulate those resources. Today we’re going to look at using the Azure CLI with service principals & <code>kubectl</code>.</p>
<h2>
Why not user accounts?
</h2>
<p>If we take a trip back in time, when people <em>gasp!</em> deployed and managed servers in their own datacenters, we’d create accounts in Active Directory or wherever and use them as service accounts. These service accounts were typically treated differently (e.g., with different policies, or different management attitudes) and used for servers, services and applications to get access to other resources. Your SQL Server might have its own domain account, your IIS appPools would run under a domain account, all sorts of things. But until late in the 00s, with the introduction of managed service accounts, these ‘service accounts’ were largely just ‘accounts’ that were used for services. They were not really any different from the account you or I might use to login to our PCs.</p>
<p>Once we transitioned to Azure AD, these types of habits continued.</p>
<blockquote>
<p><strong>SCENE</strong> A non-descript office building somewhere in the world.<br><br>
<em>(USER sits at large table. As USER sits, TABLE reveals itself to have a large touchscreen built in. “SUR40” flashes across the screen.)</em><br><br>
<strong>USER</strong>. OK, apps team needs a ‘service account’ for deploying their app in Azure…, hmm…<br><br>
<em>(USER flips through stack of post-it notes. We see various URLs, like manage.windowsazure.com, portal.azure.com, aad.portal.azure.com, account.activedirectory.windowsazure.com, etc.)</em><br><br>
<strong>USER</strong>. Ah! Here it is.<br><br>
<em>(USER opens web browser on TABLE. A Silverlight logo briefly appears, followed by a bugcheck. USER, defeated, pulls Surface Laptop out of bag and places it on TABLE.)</em><br><br>
<strong>USER</strong>. I need to get the details of this ticket out of my email.<br><br>
<em>(After a lengthy pause, Outlook opens, then quits. USER grumbles indecipherably about Access, Exchange and JET databases before deciding to go straight to the ticketing system.)</em><br><br>
<strong>USER</strong> (Calling to someone off stage). What’s our TF Service URL again?<br><br>
<strong>VOICE OFF STAGE</strong>. I’ll send it to you on our Teams team. It’s contoso.visualstudio.com. Make sure you build an ARM template for anything you need to deploy. It’s called azure devops now by the way…<br><br>
<strong>USER</strong>. Thanks. I don’t have vscode on my laptop so I’ll have to use VS Online for this. OK, new service account - let’s go to Users, Add New..<br><br>
<strong>USER</strong> (to self). It’s called Azure <strong>Active Directory</strong> , so surely it must be the same, right? Microsoft wouldn’t give multiple disparate things the same name or rename the same thing every six months, right?</p>
</blockquote>
<p>Using user accounts for automation is a Bad Idea™ and we should strongly consider otherwise. There are many reasons, but here are the first ones that come to mind:</p>
<ul>
<li>Process - how do users on/offboard at your company? How frequently are passwords changed? Do you want to be on the hook for updating <em>n</em> services every time you need a password change or reset?</li>
<li>Availability - similar to above, what happens when your password expires and you’re on vacation? Do all of your services go offline?</li>
<li>Modern authentication pipelines & experiences - things like MFA, conditional access, security keys, passwordless - all of this goes out the door if you’re using a username & password non-interactively.</li>
<li>Security - what kind of services are available to your user account? What kind of lateral movement would an attacker be able to execute with your credentials?</li>
</ul>
<p>Many people have written many words on this topic, so I’ll move on - but the message is clear - <em>do not use user accounts for automation!</em></p>
<h2>
Service Principals
</h2>
<p>So what should we use? Azure AD offers Service Principals - these are effectively ‘service accounts,’ but we have far more control over both the scope of privileges & access granted to the principal, in addition to being able to tightly control both credential and lifecycle. For example - Azure AD SPs can use passwords <em>or</em> certificates for authentication. An organization with a centrally-managed certificate authority can rotate certificates on a schedule, keeping credentials entirely available yet out of the hands of developers. Beyond that, Managed Service Identity offers managed service principals tied to a resource (very much like managed service accounts from AD) where credentials are completely managed by Azure, but the service principal can be assigned permissions & rights just like any other principal.</p>
<h2>
RBAC vs non-RBAC AKS clusters
</h2>
<p>There are two ways to use AKS clusters in Azure - with or without Azure AD integration, usually referred to as ‘RBAC-enabled’ in most of the docs. The key difference here is related to the management vs. data planes of the Azure resource, in this case the AKS cluster. For example, think about an Azure VM. I may have <em>management</em> rights to deploy/restart/change the VM’s Azure configuration (disks, networking, resource group, etc), but may not have an account to RDP to the VM and make any changes there (the <em>data</em> surface). Similarly, I may have the ability to <em>manage</em> a storage account or SQL DB - change properties, move to different groups, add data sync partner regions, etc - but I may not have access to the data within those resources.</p>
<p>In a non-RBAC cluster, we have <em>management</em> rights within the Azure portal to manage the resource, including getting credentials for kubectl. The difference between this route and the RBAC route is the type of credential - in a non-RBAC scenario, I’m using a service principal to fetch/generate a credential for AKS, but the credential itself is disconnected from the service principal. It is merely access to <em>generate a credential</em> for the management interface. In an RBAC or Azure AD-enlightened cluster, not only will my service principal potentially be used to <em>manage</em> the cluster, but I can also connect to the cluster <em>as the service principal</em> which offers a greater level of granularity to the types of management operations allowed <em>within the cluster itself</em>, e.g., roles that are more granular than what is populated within the Azure management surface.</p>
<p>For an RBAC cluster, check the docs <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/aks/azure-ad-rbac">here</a> instead.</p>
<p>In our specific scenario, let’s look at how we can create a service principal for use with a non-RBAC-enabled aks cluster & kubectl. This is fetching credentials, but <em>not</em> connecting <em>as the service principal</em>.</p>
<h2>
Creating the service principal
</h2>
<p>This is pretty straightforward, if you have permission within your Azure AD tenant. In most cases you will. If you don’t, you’ll have to talk to your admin 😒 Get logged into your <code>az</code> cli:</p>
<p><code>az login</code></p>
<p>Once we’re logged in, hit it with this (full documentation with a ton of options for creating these is <a href="proxy.php?url=https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest">here</a>):</p>
<p><code>az ad sp create-for-rbac --name "my-aks-automation-sp-name"</code></p>
<p>Note that this SP is different from the one you configured when you created your aks cluster. This is a general-purpose SP that you can use for all sorts of things (although I’d suggest using just one per task, with granular permissions). They’re free, so create as many as you want.</p>
<p>The output of <code>az ad sp create-for-rbac</code> will give you a chunk of info - notably, your new service principal’s ID (shown as <code>appId</code> in the response). If you went the password route (i.e., you didn’t include a certificate), you’ll also get a <code>password</code>, along with the name of the Azure AD tenant the SP was created in.</p>
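<p>The shape of that output looks roughly like this (every value below is made up):</p>

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "my-aks-automation-sp-name",
  "name": "http://my-aks-automation-sp-name",
  "password": "<generated-password>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```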
<h2>
Hooking our SP up with some permissions
</h2>
<p>Next we want to grant our SP some rights to resources. You can do this in the portal or CLI. In the portal, go to your resource - in our case, your AKS managed cluster. You only have to assign the rights once so no one will judge you for doing it from the portal. You can also dig around to find the right role IDs and assignment scopes, but there are lots of docs for that. If you have a lot of clusters it may be worth that effort. I’ll do a follow up if anyone asks.</p>
<p>Once you find your AKS cluster, click Access Control (IAM) and add a role assignment. Search for your SP (the appId field will work, or the name as listed in the output of the <code>az ad sp</code> command earlier). Find an appropriate role. In my case, Azure Kubernetes Service Cluster Admin Role is what I was looking for. Select that, then make sure you Save/Add the role assignment.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--4CyX7w7_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/aks-sp-portal-role-assignment.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--4CyX7w7_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/aks-sp-portal-role-assignment.png" alt="portal-role-assignment" title="portal role assignment" width="608" height="409"></a></p>
<h2>
Logging into the CLI with your newly minted SP
</h2>
<p>Now that our roles are assigned, let’s jump back to the CLI. You can do this in your current CLI but, of course, this can be made a part of your deployment scripts through your deployment tool of choice - if it can run az-cli, it can be automated.</p>
<p>If you want to be absolutely sure it’s working, clear all existing credentials first using <code>az account clear</code>. Once we’ve done that, let’s get logged into the CLI with your SP. All of the required info for this came out of the output of the <code>az ad sp</code> command.</p>
<p><code>az login --service-principal --username "your-service-principal-appId" --password "your-service-principal-password" --tenant "your-service-principal-tenant.whatever"</code></p>
<p>Of course, if you used a certificate you’ll need to make sure your cert is available to the CLI, and the command switches will be a bit different - refer to the docs <a href="proxy.php?url=https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest">here</a>. You can ensure your login was correct by checking the output - under <code>user.type</code>, you’ll see <code>servicePrincipal</code>. You can get back to this at any time with <code>az account list</code>.</p>
<h2>
<code>kubectl</code> as an SP
</h2>
<p>Lastly, we need to get our aks credentials for kubectl:</p>
<p><code>az aks get-credentials -g <your-resource-group-name> -n <your-aks-cluster-name></code></p>
<p>Make sure it all works with <code>kubectl get nodes</code>.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--KNqpveH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/aks-sp-cli-kubectl.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--KNqpveH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/aks-sp-cli-kubectl.png" alt="kubectl" title="cli" width="800" height="240"></a></p>
<p>:bowtie:</p>
<p>happy automating!</p>
aks, k8s, rbac, azure

Retrofitting OIDC to legacy systems via reverse proxy - John Patrick Dandison - Wed, 30 Oct 2019 08:24:18 +0000
https://dev.to/jpda/retrofitting-oidc-to-legacy-systems-via-reverse-proxy-2cmn
https://dev.to/jpda/retrofitting-oidc-to-legacy-systems-via-reverse-proxy-2cmn<p>Obviously we want systems that use modern authentication. In the case of Azure AD, you get <em>the good stuff</em>, like <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview">Conditional Access</a> and a whole host of new authentication mechanisms, like <a href="proxy.php?url=https://docs.microsoft.com/en-us/windows/security/identity-protection/hello-for-business/hello-overview">Windows Hello</a> & <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-passwordless">FIDO2</a>.</p>
<p>This is a big jump, however, especially for old apps. And by old, I mean <em>old</em> - running on unsupported or deprecated platforms (or versions of those platforms), where the skillset doesn’t exist in your circle to update it, or where the vendor who created the platform has ceased to exist. Consider too that some legacy systems may be on-deck for total replacement, either with custom dev or off-the-shelf systems, where we want to keep costs as low as possible.</p>
<p>Like most modernization projects, there are different ways to approach the problem - in this case, I’m focused on the ‘moat-building’ method, where we dig out and isolate the problem systems and proxy access through more modern systems. This is a bit quick-and-dirty, but for systems that are slated for replacement or just can’t be touched, it’s a low-impact way to wrap new functionality around an old problem. You can apply this to all sorts of parts of a system as well - e.g., extracting and shipping data out of an old system into a new system and schema while slowly moving referencing bits over to the new system.</p>
<p>Let’s dig in.</p>
<h2>
Reverse proxying
</h2>
<p>Since we’ve got a web app and we want to add only authentication, it’s relatively straightforward. We need to:</p>
<ul>
<li>Authenticate the user, using a typical oidc-tango</li>
<li>Redirect to identity provider</li>
<li>Consume and validate issued token</li>
<li>Read claim data</li>
<li>Transform some claim data before forwarding along</li>
<li>In our specific case, we need specific values from the claims to be forwarded in a specific header</li>
</ul>
<h2>
Layout
</h2>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--9S2NU4Qm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/apache-jwt-00.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--9S2NU4Qm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/apache-jwt-00.png" alt="layout" title="layout" width="800" height="467"></a></p>
<p>Our layout is straightforward. We have our original old app (we’ll call it <code>old-app.example.com</code>) and our front-end (<code>myapp.example.com</code>). We’ll need to make sure DNS resolves on the Apache host for <code>old-app.example.com</code> and that our old app server is configured for that host header or to accept all names. If it’s not in your local DNS, you could add it to <code>/etc/hosts</code> for local resolution.</p>
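<p>If you do go the <code>/etc/hosts</code> route, the entry on the Apache host is just the backend’s IP mapped to the old name (the address below is a placeholder):</p>

```
# /etc/hosts on the Apache host
10.0.0.10   old-app.example.com
```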
<p>Next we’ll want to make sure <code>myapp.example.com</code> resolves to our Apache server. You would likely want to use named virtual hosts here, although you could get away with <code>*</code> if you wanted. My virtual host on <code>:80</code> is just redirecting to <code>:443</code> - yours may need to do something different.</p>
<p>Lastly you’ll want to make sure you have any TLS certs available and installed on the Apache host.</p>
<p>I’m using Apache 2.4.* on Ubuntu 18.04, with binaries for <code>mod_auth_openidc</code> 2.3.3, which is available in the universe repositories.</p>
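<p>On Ubuntu 18.04, getting the module and its supporting Apache modules in place looks roughly like this (package and module names as found in the universe repo - verify against your distro):</p>

```shell
sudo apt install libapache2-mod-auth-openidc
sudo a2enmod auth_openidc proxy proxy_http headers ssl
sudo systemctl restart apache2
```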
<h2>
Considerations
</h2>
<p>Since we’re usurping the user path to the app, we’ll need to make sure we manipulate the network environment & DNS in a way that makes the app either inaccessible or unusable if someone were to hit it directly. If hosted in Azure, this could be something like a two-subnet VNet, one subnet with the app and the other with the proxy, with NSGs locking down the app subnet to only allow traffic from the proxy subnet. On-prem, there are myriad ways to segment networks and restrict access.</p>
<p>You’ll also want to host with TLS - Azure AD reply URLs require <code>https</code>, except in the case of localhost.</p>
<h2>
Apache config
</h2>
<p>For apache, we’re going to use <a href="proxy.php?url=https://github.com/zmartzone/mod_auth_openidc"><code>mod_auth_openidc</code></a> which is an <a href="proxy.php?url=https://openid.net/certification/">OIDC-compliant</a> relying party/client module for OpenID Connect.</p>
<p>Let’s take a look at the config.</p>
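<p>As a starting point, a minimal <code>mod_auth_openidc</code> vhost for this layout might look like the sketch below - the tenant ID, client ID/secret, crypto passphrase, and certificate paths are all placeholders, and your claim-to-header mapping may differ:</p>

```apache
<VirtualHost *:443>
    ServerName myapp.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/myapp.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/myapp.example.com.key

    # relying-party configuration against Azure AD
    OIDCProviderMetadataURL https://login.microsoftonline.com/<tenant-id>/v2.0/.well-known/openid-configuration
    OIDCClientID <application-client-id>
    OIDCClientSecret <client-secret>
    OIDCRedirectURI https://myapp.example.com/redirect_uri
    OIDCCryptoPassphrase <random-passphrase>
    OIDCScope "openid profile"
    OIDCRemoteUserClaim preferred_username

    <Location />
        AuthType openid-connect
        Require valid-user
        # transform a claim into the custom header the legacy app expects
        RequestHeader set X-jpda-header-loc "%{OIDC_CLAIM_preferred_username}e"
        ProxyPass http://old-app.example.com/
        ProxyPassReverse http://old-app.example.com/
    </Location>
</VirtualHost>
```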
<h2>
Results
</h2>
<p><code>mod_auth_openidc</code> passes all of the claims through as headers, in addition to our custom header with our transformed value. My source claim in this case was <code>preferred_username</code>, which we transformed via Apache into <code>X-jpda-header-loc</code>.</p>
<p>The advantage to this method is potentially <em>many</em> apps could live behind this proxy, with very little additional effort to onboard more. Of course the tradeoff with proxying is a single choke point for traffic, so carefully consider which apps should be grouped behind specific instances.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--YfuE9yb6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/apache-jwt-01.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--YfuE9yb6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/apache-jwt-01.png" alt="claims" title="claims" width="800" height="1073"></a></p>
oidc, identity, azuread

Private App Service-to-App Service calls in multitenant PaaS - John Patrick Dandison - Fri, 14 Jun 2019 18:11:10 +0000
https://dev.to/jpda/private-app-service-to-app-service-calls-in-multitenant-paas-51g8
https://dev.to/jpda/private-app-service-to-app-service-calls-in-multitenant-paas-51g8<p>Recently, <a href="proxy.php?url=https://twitter.com/shankuehn">Shannon</a> & I got an interesting request from a customer who is using multi-tenant App Services (e.g., non-ASE), but wanted to keep communications restricted to their virtual network. If you’re familiar with Azure App Services, typically the way to achieve this is via an <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/app-service/environment/intro">App Service Environment</a> (ASE). ASEs are single-tenant, dedicated nodes running the App Service stack <em>in your virtual network</em> - this flips the ‘private’ bit as they have RFC 1918 IPs from within your VNet. This means you can do pretty much whatever you want with networking - use 17 firewalls in front of it, access vnet or on-prem resources, whatever. Problem is, they can get a bit pricey, at least relative to multi-tenant App Service. Internal ASEs can have a bit of deployment drama too, since you need to manage DNS and get wildcard certificates for your ASE (wildcard certs being particularly difficult to get from security & ops teams). Fortunately, with some (very) recent additions, this is possible using a combination of <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/app-service/web-sites-integrate-with-vnet">‘New’ VNET Integration (gateway-less)</a> and <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview">Service Endpoints</a>/<a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions">IP Restrictions</a> in Azure. Let’s dig in.</p>
<h2>
Layout
</h2>
<p>We’ve got:</p>
<ul>
<li>
<code>jpd-app-1</code>, a web app (API) with its own App Service Plan</li>
<li>
<code>jpd-app-2</code>, another web app (API) with its own App Service Plan</li>
<li>a vnet, <code>vnet1</code>, with three subnets, <code>jpd-app-1-subnet</code>, <code>jpd-app-2-subnet</code> and <code>default</code>
</li>
<li>a VM for testing, in the <code>default</code> subnet</li>
<li>The two apps & vnet live in the same region</li>
</ul>
<p>We need <code>jpd-app-1</code> and <code>jpd-app-2</code> to talk to each other and also be accessible from the VNet, but not from the internet at all.</p>
<h2>
Integrating with the VNet
</h2>
<p>First we need to get integrated to the VNet from each of our app services. We need to keep the app service integrations in their own subnets, so we can delegate the subnet to the App Service. First we need to register the <code>Microsoft.Web</code> service endpoint. If you’re creating a new VNet, you can do this at creation time. If not, it’s easy to enable afterward.</p>
<p>During creation:</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Cp9C5qdY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/Annotation%25202019-06-14%2520141341.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Cp9C5qdY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/Annotation%25202019-06-14%2520141341.png" alt="service endpoints" width="415" height="586"></a></p>
<p>After creation (from the VNet -> Service Endpoints pane), enable it on all the subnets you want to access our App Services from.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--No-MrJlv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/vnet-service-endpoints.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--No-MrJlv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/vnet-service-endpoints.png" alt="service endpoints" width="610" height="348"></a></p>
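<p>If you’d rather script it, the same thing can be done per-subnet with the CLI (resource group name below is a placeholder; vnet/subnet names match this walkthrough):</p>

```shell
az network vnet subnet update \
  --resource-group <your-resource-group> \
  --vnet-name vnet1 \
  --name default \
  --service-endpoints Microsoft.Web
```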
<p>Once you’ve enabled them, head to the Networking pane of the first App Service (<code>jpd-app-1</code> in my example).</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--jwI_eEye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-networking.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--jwI_eEye--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-networking.png" alt="networking pane" width="800" height="816"></a></p>
<p>Configure VNet Integration, then choose Add VNet (Preview). Choose your vnet, then your subnet.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--VZ6porGH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-networking-add.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--VZ6porGH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-networking-add.png" alt="add feature" width="800" height="368"></a></p>
<p>Do this for both of your App Services, integrating each into their respective subnets. At this point, our app services are now connected to <code>vnet1</code> - if we had resources we needed to access in the VNet, like a VM, a SQL Managed Instance or even something on-prem via site-to-site VPN, we’d be able to by this point. The app services are exposed to the internet, however. We’ve enabled access <em>from</em> the App Services to the VNet, but we haven’t restricted access <em>to</em> the App Services to only the VNet just yet. For that we’ll use Access Restrictions.</p>
<h2>
Access Restrictions
</h2>
<p>Next we need to enforce the access restrictions. Back in the networking pane of your App Service, you’ll see ‘Access Restrictions.’ This is where we’ll add our vnets. By default, your App Service will allow all traffic from all sources. You can leave the default Allow All rule, since as soon as we add our own restrictions, the Allow All rule becomes a Deny All, with our rules taking priority.</p>
<p>You’ll also note there are <em>two</em> hosts listed - <code><yourapp>.azurewebsites.net</code> and <code><yourapp>.scm.azurewebsites.net</code> - the first one is your app, the second one (<em>.scm.</em>) is the backstage view of your web app, the Kudu console. You can restrict these independently. Make sure you add restrictions for both the main site and Kudu/scm site if you don’t want any access from the internet!</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Bo_nyKU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-access-kudu.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Bo_nyKU7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-access-kudu.png" alt="sites" width="673" height="100"></a></p>
<p>Now let’s add our restrictions! Since this is <code>jpd-app-1</code> and it’s integrated into the <code>jpd-app-1-subnet</code> subnet, we need the following rules:</p>
<ul>
<li>Access from <code>jpd-app-2-subnet</code> subnet</li>
<li>Access from <code>default</code> subnet</li>
<li>Deny everything else (implicit once any Allow rule exists)</li>
</ul>
<p>First, the <code>jpd-app-2-subnet</code> rule. Make sure you choose ‘Virtual Network’ as the type, then choose your VNet (<code>vnet1</code>) and the subnet of the other app that needs access (<code>jpd-app-2-subnet</code>). Repeat for the <code>default</code> subnet to enable the rest of the VNet, and for any other VNets/subnets that may need access.</p>
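<p>The precedence behavior is worth internalizing: rules are evaluated in priority order, and the moment any rule exists, unmatched traffic is denied. A small simulation (an illustration of the documented evaluation logic, not Azure’s actual implementation; the subnet names match this walkthrough) makes it concrete:</p>

```python
# Illustrative simulation of App Service access-restriction evaluation.
# Not Azure's implementation - just the documented precedence: rules are
# evaluated in priority order, and once any rule exists, unmatched
# traffic is implicitly denied.

def evaluate(rules, source_subnet):
    """Return 'Allow' or 'Deny' for traffic from source_subnet."""
    if not rules:
        return "Allow"  # no restrictions configured: default Allow All
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["subnet"] == source_subnet:
            return rule["action"]
    return "Deny"  # implicit deny once any rule exists

# Rules for jpd-app-1, per the walkthrough above
rules = [
    {"priority": 100, "subnet": "jpd-app-2-subnet", "action": "Allow"},
    {"priority": 200, "subnet": "default", "action": "Allow"},
]

print(evaluate(rules, "jpd-app-2-subnet"))  # Allow
print(evaluate(rules, "internet"))          # Deny
print(evaluate([], "internet"))             # Allow - no rules yet
```

<p>Note how the empty rule set allows everything: that is the out-of-the-box Allow All behavior, and it flips to deny-by-default as soon as the first rule is added.</p>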
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Q1qLDdpm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-access-add-app-2.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--Q1qLDdpm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-access-add-app-2.png" alt="app2" width="466" height="805"></a></p>
<h2>
Repeat for the other app
</h2>
<p>In your other app (<code>jpd-app-2</code>), do the same thing. Add access restrictions, only this time choose the subnet of the other app (<code>jpd-app-1-subnet</code>).</p>
<h2>
Testing
</h2>
<p>If you go to your app in a browser from your local machine, you should get a 403, ‘web site stopped.’ This is the standard experience for restricted app services - it’s really a 403 Forbidden; ‘stopped’ is a bit misleading here.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--252gfBAy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-stopped.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--252gfBAy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-stopped.png" alt="stopped" width="800" height="552"></a></p>
<p>From your web app’s Kudu console (<code><yourapp>.scm.azurewebsites.net</code>) - accessible only from the VM on your VNet, or from the internet if you didn’t restrict the scm site - we can curl the other app to make sure our configuration is all square. <code>tcpping</code> isn’t a valid test here, because the site still accepts connections from the internet - it just returns <code>403</code>.</p>
<p><code>curl -sI https://<yourapp>.azurewebsites.net</code></p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--CYOlQR1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-curl.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--CYOlQR1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-curl.png" alt="app-1-curl" width="800" height="359"></a></p>
<p>This should return some HTML/markup. If it’s a <code>403</code>, something is wrong. If you get a <code>200/OK</code>, you’re in business!</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--sCa73nnq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-to-2-success.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--sCa73nnq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/jpd-app-1-to-2-success.png" alt="app-1-to-app-2" width="800" height="524"></a></p>
<p><em>Note: you may notice that my URLs changed near the end - I added a bad rule so I had to trash the two app services and recreate them. <strong>Don’t put zero (0) as a priority for a rule!</strong></em></p>
<p>I haven’t tried this with Functions on a consumption plan yet, but that’s coming next. Stay tuned!</p>
appserviceazurepaascloudGeo-replicating Azure Service Bus with guaranteed orderingJohn Patrick DandisonTue, 23 Oct 2018 14:20:35 +0000
https://dev.to/jpda/geo-replicating-azure-service-bus-with-guaranteed-ordering-19bd
https://dev.to/jpda/geo-replicating-azure-service-bus-with-guaranteed-ordering-19bd<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--UJwGxD-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/0_eAcdrF5a1HTPad7v.jpg" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--UJwGxD-9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/0_eAcdrF5a1HTPad7v.jpg" alt="Photo by [Slava Bowman](https://unsplash.com/@slavab)" width="800" height="518"></a><em>Photo by <a href="proxy.php?url=https://unsplash.com/@slavab?utm_source=medium&utm_medium=referral">Slava Bowman</a></em></p>
<p>Recently, I had an interesting question come across my desk from a customer:</p>
<blockquote>
<p>How do we ensure Service Bus message ordering <strong>and</strong> get active/active geographic availability?</p>
</blockquote>
<p>We’ve solved parts of this in numerous ways over the years — Service Bus supports <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-async-messaging#paired-namespaces">paired namespaces</a> — but with a list of caveats that include out-of-order messages. It’s also a question of <em>send</em> availability vs. <em>receive</em> availability. We want our producers to always be able to send messages — that means our receivers/consumers will need to be smarter, but we’ll get into that in a bit.</p>
<p>In our specific scenario, global message ordering is of lesser importance than transactional or scoped message order — consider 10 transactions: 0, 1, 4, and 5 are related; 2, 3, and 6 are related; and 7, 8, and 9 are related. Those three batches can be processed in any order, but the messages <em>within</em> a specific batch or scope need to be processed in order. We see this issue frequently when customers are integrating with legacy systems or mainframes.</p>
<p>For a more concrete example, let’s say I’m tracking a lot of shipments. I have many shipments and each shipment has a collection of statuses. Those statuses are serial — a package destined for Charlotte from Seattle probably wouldn’t end up in Des Moines before departing Seattle.</p>
<p>Here’s some data:</p>
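<p>(The gist with the sample data didn’t survive syndication; here’s a stand-in with made-up tracking numbers and statuses that illustrates the shape — statuses for a given shipment are serial, while shipments are independent of each other:)</p>

```python
from collections import defaultdict

# Illustrative shipment-status data (made-up values). The statuses for a
# given tracking number are serial; ordering across shipments is free.
statuses = [
    {"tracking": "A123", "seq": 1, "status": "Departed Seattle"},
    {"tracking": "B456", "seq": 1, "status": "Departed Portland"},
    {"tracking": "A123", "seq": 2, "status": "Arrived Des Moines"},
    {"tracking": "B456", "seq": 2, "status": "Arrived Boise"},
    {"tracking": "A123", "seq": 3, "status": "Arrived Charlotte"},
]

# Group by shipment: within each group, seq must stay ascending,
# even though the groups themselves are interleaved above.
by_shipment = defaultdict(list)
for s in statuses:
    by_shipment[s["tracking"]].append(s["seq"])

assert all(seqs == sorted(seqs) for seqs in by_shipment.values())
```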
<p>However, if I have multiple packages, I don’t really care if package A or package B has its statuses updated in my backend system first, as long as the order of the status messages for each specific package stays consistent.</p>
<p>Another example — change-data-capture out of a database system. If I run an INSERT after a DELETE, even though the DELETE happened first, my data isn’t going to be correct.</p>
<p>Back to our shipment tracking scenario. First we need a way to ensure ordering of messages within our Service Bus queue or topic. We’ll use <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-sessions">Message Sessions</a> for that. Message Sessions have some other tradeoffs, however — a session ID is effectively a partition key. It means that data with that key is stickied to a particular data store. It makes sense when you think about it; messages distributed across multiple data stores would require a lot more processing server-side to collect and order.</p>
<p>In-order processing implies some other tradeoffs too — by definition, messages are processed in order, serially, by a single consumer at a time — no processing parallelism here (well, at least not within our individual sessions — we’ll get there soon).</p>
<p>Messages within a session are FIFO — but not necessarily our sessions themselves. We may have many sessions to process, which we would want to process concurrently. Remember, we only care about order guarantees <em>within the scope</em> of our session/shipment. This way we can still ensure timely message processing without impacting our ordering requirements.</p>
<p>In a single Azure region this is no big deal. We have our Service Bus namespace in, say, East US, and our producers and consumers live in the same region. Full-region failures in Azure are certainly quite rare, but not completely unheard of. Service degradation or service-specific outages are still a concern too, along with dependent services (e.g., when a storage outage causes cascading failures of other services within a region). Our customer is deployed into two regions in the US, and App Service, SQL DB, storage, etc. are all geographically replicable in one way or another (or stateless). Service Bus was the one thing we didn’t have a compelling answer for, so we decided to build them a way to achieve it.</p>
<p>Let’s start with the code — it’s here: <a href="proxy.php?url=https://github.com/jpda/azure-service-bus-active-rep">https://github.com/jpda/azure-service-bus-active-rep</a> — we’re going to use:</p>
<ul>
<li>Two Service Bus namespaces, in different regions.</li>
<li>A <a href="proxy.php?url=https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels#consistency-levels">strongly-consistent Cosmos DB</a> collection, as a message journal</li>
<li>Message Sessions in our Service Bus queues</li>
</ul>
<h3>
Send Availability
</h3>
<p>To maintain send availability, we’re going to set up two independent Service Bus namespaces and create a session-enabled queue within each namespace. You could do this with Topics as well, if your case requires that.</p>
<p>Each sender will mark the session with some sort of deterministic ID or domain unique key. In our shipping scenario, a tracking number would be an excellent session key. It could also be some other unique ID that represents the scope of the messages being sent — customer ID, transaction ID, etc. Your sender should send the message to <strong>both</strong> queues, with identical session and message IDs.</p>
<p>In practice, the Session ID represents the ID of the entity you would like to ‘lock’ — because the consumers will attempt to record their work items in the Cosmos DB collection using the entity ID (session or message) as the document ID, if there is a document ID collision, the consumer will receive an exception and refetch the entity to get the latest status before beginning processing. We’ll dig into this below.</p>
<p>You’ll also notice in the Sender that we’re setting <code>ScheduledEnqueueTimeUtc</code> to now + 15 seconds on one of the queues. We’ll address this later, but it’s primarily to reduce churn and prefer a ‘primary’ region over the other.</p>
<p><a href="proxy.php?url=https://github.com/jpda/azure-service-bus-active-rep/tree/master/azure-service-bus-active-sender">https://github.com/jpda/azure-service-bus-active-rep/tree/master/azure-service-bus-active-sender</a></p>
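<p>The dual-send pattern can be sketched with in-memory stand-ins for the two queues (the real sender, linked above, uses the .NET Service Bus SDK; this just shows the shape). The scheduled-enqueue offset on the secondary copy is the only difference between the two sends:</p>

```python
import time
import uuid

# In-memory stand-ins for the two session-enabled queues.
primary_queue, secondary_queue = [], []

def send_to_both(tracking_number, body, delay_seconds=15):
    """Send the same message to both queues with identical session and
    message IDs; the secondary copy is scheduled ~15s in the future so
    the primary region's consumers usually win the claim."""
    now = time.time()
    message = {
        "session_id": tracking_number,    # deterministic, domain-unique key
        "message_id": str(uuid.uuid4()),  # identical in both copies
        "body": body,
    }
    primary_queue.append({**message, "enqueue_at": now})
    secondary_queue.append({**message, "enqueue_at": now + delay_seconds})
    return message["message_id"]

send_to_both("A123", "Departed Seattle")
assert primary_queue[0]["message_id"] == secondary_queue[0]["message_id"]
assert secondary_queue[0]["enqueue_at"] > primary_queue[0]["enqueue_at"]
```

<p>Using the tracking number as the session ID is what keeps all of a shipment’s statuses pinned to one consumer at a time, in both queues.</p>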
<h3>
Receive Availability
</h3>
<p>Receive availability gets tricky when you have multiple copies of the same message, especially in an integration case where you don’t have full control over the final destination of the data. In our case, we’re interacting with a mainframe (among other systems) and don’t want to add undue stress by deduplicating messages via querying the mainframe.</p>
<p>Instead, our consumers follow a fairly simple (albeit chatty) pattern:</p>
<ul>
<li>Open a MessageSession</li>
<li>Create new document with SessionId or SessionId_MessageId as the Document ID</li>
<li>If we succeed at creating the document, we then update the status in the document (e.g., Status: Working)</li>
<li>If we get a 409 (Conflict), we pull the latest version of the document from Cosmos and check status</li>
<li>If the status is Pending, we update the status to Working, attempt to write again, wait for 200</li>
<li>If we succeed, then we start processing the message</li>
<li>When we’re complete, we update the document again and Complete the Service Bus message.</li>
</ul>
<p>That’s a lot so we’ll break it down a bit.</p>
<ul>
<li>First we start receiving a <code>MessageSession</code>. This locks/hides all other messages with that <code>SessionID</code> in the specific queue, so we can be sure this consumer will process them in the order received.</li>
<li>Before we start processing the message, we record in our journal (our Cosmos DB collection) the session ID or session/message ID composite. Because we’re using this as our document ID, if there is a conflict (e.g., another consumer has already created the document), Cosmos will return a <code>409 Conflict</code> — at which point we know another consumer has started reading the session in one of our queues.</li>
<li>If that journal write is successful, we then attempt to update the document — updating a field called Status to Pending or Working.</li>
<li>This update follows the same rules; if the document we have updated is older than what is in the database, Cosmos returns an error code — at which point we fetch the latest version, check status and decide from there.</li>
<li>If the update to status succeeds (e.g., we’ve updated the status to Working) we can start to work.</li>
<li>These two operations being discrete (instead of as a single operation) means we keep the window for changes small.</li>
<li>When we’re done processing the message, we can Complete the message to remove it from the queue and begin processing the next.</li>
<li>The other consumer, in the secondary region, will pull the document, see the status as ‘Working’ and Abandon the message; abandoning here causes the message to unlock, and the next consumer will attempt to pick it up. When a consumer reads the message, checks status and sees Completed, the message in the secondary queue will be Completed. (Note: in the code sample today, the consumers are not abandoning the messages; they loop with a sleep/wait to recheck message status.)</li>
</ul>
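<p>The claim-by-insert step above can be sketched with a dictionary standing in for the strongly-consistent Cosmos DB journal (the real consumer uses the Cosmos SDK; here the 409 is modeled by a key-existence check):</p>

```python
# In-memory sketch of the claim-by-insert pattern. Inserting a document
# whose ID already exists raises Conflict, just as Cosmos returns a 409.

class Conflict(Exception):
    pass

journal = {}

def create_document(doc_id, status):
    if doc_id in journal:
        raise Conflict(doc_id)  # Cosmos would return 409 Conflict
    journal[doc_id] = status

def try_claim(session_id, message_id):
    """Return True if this consumer won the claim and should process."""
    doc_id = f"{session_id}_{message_id}"
    try:
        create_document(doc_id, "Working")
        return True  # we own the work item
    except Conflict:
        # Another consumer got here first: fetch latest status and decide.
        status = journal[doc_id]
        return status not in ("Working", "Completed")

# Two consumers (primary and secondary region) race for the same message:
assert try_claim("A123", "m1") is True    # primary wins the insert
assert try_claim("A123", "m1") is False   # secondary sees 409, backs off
journal["A123_m1"] = "Completed"          # primary finishes and records it
```

<p>The real thing splits the create and the status update into two writes, as described above, to keep the window for concurrent changes small; this sketch collapses them for brevity.</p>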
<p>In a failure case, where one of the consumers has failed, we have two stages of failure:</p>
<p>The processing logic in the consumer has died, but not the process (e.g., the process hosting the processing logic is still alive) — in this case, our processor could attempt to reprocess, or in the case of an unrecoverable failure, update the journal status for that message or session to Failed or Faulted and move the message to the dead-letter queue for manual remediation.</p>
<p>If the consumer has died completely (as in, the process is completely dead), we can use <code>MessageWaitTimeout</code> on the <code>SessionHandlerOptions</code> we use to configure the <code>SessionHandler</code> to set a reasonable timeout. Once that timeout duration has passed, the session is unlocked and our next consumer will pick up the session, check status in Cosmos and continue processing.</p>
<p><a href="proxy.php?url=https://github.com/jpda/azure-service-bus-active-rep/blob/master/azure-service-bus-active-receiver-lib/DataReceiver.cs">https://github.com/jpda/azure-service-bus-active-rep/blob/master/azure-service-bus-active-receiver-lib/DataReceiver.cs</a></p>
<h3>
Entity Locks
</h3>
<p>Your choice of entity lock has some implications. If you choose to lock at the Session level, your secondary receivers will abandon the <em>session</em>, at which point your primary consumer is expected to process the entire session. The risk here is you may need to manage which messages have been processed in the session in case of a failure — e.g., Session ID 1 has messages A B C and D. A and B process correctly, but C causes the consumer to die; the secondary consumer will need to either reprocess <em>all</em> messages in its copy of the session (as it doesn’t know which messages have been processed), supporting at-least-once delivery, or it needs a message processing log to ensure it doesn’t process a message a second time.</p>
<p>If you go the session + message composite ID route, where you’re logging each message in a session, you’ll be able to keep your two queues in sync at both the session level and the message level. As messages change state within the primary queue, the secondary processors pick up that change and dispose of their copies to mirror it (e.g., the primary completes Session 1, Message A and updates the journal; when the secondary’s copy of Session 1, Message A checks Cosmos with that ID and sees it is completed, it completes the secondary message). The risk here is a potential out-of-order case where the secondary gets ahead of the primary because of an unknown failure with the journal. I haven’t figured out a case where this would happen, as the messages should be in the same order in both queues, but theoretically:</p>
<ul>
<li>
<strong>Primary</strong> Message 1 → Cosmos record written → Processing beginning</li>
<li>
<strong>Secondary</strong> Message 2 → No Cosmos record written → Processing beginning</li>
</ul>
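<p>Message-level mirroring can be sketched the same way (again with a dict standing in for the journal, keyed by the composite <code>SessionId_MessageId</code> described above):</p>

```python
# Sketch of message-level mirroring: the secondary region disposes of
# its copy of each message according to the journal status written by
# the primary. A dict stands in for the Cosmos journal.

journal = {"A123_m1": "Completed", "A123_m2": "Working"}

def secondary_action(session_id, message_id):
    """Decide what the secondary consumer does with its copy."""
    status = journal.get(f"{session_id}_{message_id}")
    if status == "Completed":
        return "complete"  # primary finished: remove our copy too
    if status == "Working":
        return "abandon"   # primary owns it: unlock and retry later
    return "process"       # no record yet: claim it ourselves

assert secondary_action("A123", "m1") == "complete"
assert secondary_action("A123", "m2") == "abandon"
assert secondary_action("A123", "m3") == "process"
```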
<h3>
Load balancing
</h3>
<p>In our scenario, we have a set of legacy systems, some are single instance, some are on-premises. We’d prefer the majority of our traffic go through a single ‘primary’ Azure region, closest to our on-premises systems, but failover to a different region in case of a region fault.</p>
<p>In addition, this model is similar to products we’re already using (like SQL DB geo-failover), so we’d already be following this primary/secondary type of pattern in any case.</p>
<p>That said, this same pattern could be used for processing messages in any region, by any consumer, for potentially greater scale and throughput.</p>
<h3>
Additional notes + thoughts
</h3>
<p>As we said earlier, the <em>senders</em> here are dictating which entity to lock — if, for example, we want to allow individual messages to be locked (vs an entire session), the sender can use a completely unique or more granular session ID (perhaps a composite, like <code>PackageId_ShipmentStatusId</code>), which would be reflected in the journal. In this case, our primary and secondary consumers could consume the session independently in each queue. The receivers don’t care what entity is being locked, as long as the entity locking (and by extension, the entity ID generation scheme) is consistent.</p>
<p>We’re writing messages with <code>ScheduledEnqueueTimeUtc</code> to the secondary queue with a 15 second delay, primarily to prefer our primary region over the secondary. Provided everything is operating normally, this should provide ample time for the primary set of consumers to receive, check and record work items before the message even appears in the secondary queue.</p>
<p>This is a work in progress, so feedback is welcome.</p>
servicebusazurepatternsqueuesGeneric ListAdapter for Xamarin.Android RecyclerViewJohn Patrick DandisonThu, 09 Aug 2018 16:23:14 +0000
https://dev.to/jpda/generic-listadapter-for-xamarin-android-recyclerview-2g95
https://dev.to/jpda/generic-listadapter-for-xamarin-android-recyclerview-2g95<p>I’ll preface this by saying I know literally nothing about Android (or mobile, in general) development. I’ve used Xamarin for about 7 hours now and I think it’s neat, but my only goal is to get up a stub of an app to act as a client for the real work, which is a bunch of Azure stuff. This might be useful to someone or it could be a complete <a href="proxy.php?url=https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect">Dunning-Kruger</a> moment for me; only time will tell. Proceed with caution.</p>
<p>After reading a bit about ListViews I found out about <a href="proxy.php?url=https://blog.xamarin.com/recyclerview-highly-optimized-collections-for-android-apps/">RecyclerView</a>, which has some cool viewport management stuff to reclaim memory for items you’ve scrolled past while queuing up new items about to come into viewport.</p>
<p><a href="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--ExMcgTK7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/1%2520__rNlpuO__%2520byTAYObhua4IN9w.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://res.cloudinary.com/practicaldev/image/fetch/s--ExMcgTK7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://jpda.dev/img/1%2520__rNlpuO__%2520byTAYObhua4IN9w.png" alt="Image from Xamarin blog post [here](https://blog.xamarin.com/recyclerview-highly-optimized-collections-for-android-apps/)" width="" height=""></a>Image from Xamarin blog post <a href="proxy.php?url=https://blog.xamarin.com/recyclerview-highly-optimized-collections-for-android-apps/">here</a></p>
<p>Anyway, I have two collection types for my stub app, Contacts and Conversations. It’s pretty straightforward, but as I was working through the adapter → viewholder stuff, it seems duplicative.</p>
<h3>
Let’s start
</h3>
<p>I went through the Xamarin blog post, adapting what was there to my needs and types. I’ve got:</p>
<ul>
<li>
<a href="proxy.php?url=https://gist.github.com/jpda/4ccc9cf61210970753925262eca42954#file-contact-cs">Contact</a> — my actual entity, in this case <code>Name</code> and <code>Email</code>
</li>
<li>
<a href="proxy.php?url=https://gist.github.com/jpda/4ccc9cf61210970753925262eca42954#file-contactlistrow-axml">ContactsListRow</a> — the axml item template markup</li>
<li>
<a href="proxy.php?url=https://gist.github.com/jpda/4ccc9cf61210970753925262eca42954#file-contactsadapter-cs">ContactsAdapter</a> — the adapter that actually inflates the view and binds the data</li>
<li>
<a href="proxy.php?url=https://gist.github.com/jpda/4ccc9cf61210970753925262eca42954#file-contactadapterviewholder-cs">ContactAdapterViewHolder</a> — a holder object that keeps references to the views within the <code>RecyclerView</code> template (e.g., the <code>TextView</code>)</li>
</ul>
<p>As I started replicating this for a different collection, it dawned on me this is actually a lot of the same code.</p>
<p>If we look at the ContactsAdapter, we see a lot of the same thing:</p>
<ul>
<li>A generic collection of our items</li>
<li>Linking a <code>ViewHolder</code> to a layout</li>
<li>Binding an entity item to a <code>ViewHolder</code>
</li>
</ul>
<p>That’s really about it. Take a look:</p>
<p>Notice there’s really not a whole lot of specific stuff in here. Let’s look at the reusable one:</p>
<p>The only specific stuff we need to know about is:</p>
<ul>
<li>
<code>T</code> — Type of collection</li>
<li>
<code>V</code> — Type of ViewHolder</li>
<li>
<code>Action<T,V></code> — a method that takes our item and our viewholder and binds them</li>
</ul>
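<p>Stripped of the Android specifics, the shape of the generic adapter (a collection of <code>T</code>, a holder factory, and a bind callback standing in for <code>Action<T,V></code>) can be sketched language-agnostically; here in Python, since the actual C# lives in the gists linked from this post, with a made-up contact for illustration:</p>

```python
from typing import Callable, Generic, List, TypeVar

T = TypeVar("T")  # item type (e.g., Contact)
V = TypeVar("V")  # view-holder type

class ListAdapter(Generic[T, V]):
    """Language-agnostic sketch: the adapter only needs the item
    collection, a way to create a holder, and a bind callback."""

    def __init__(self, items: List[T],
                 create_holder: Callable[[], V],
                 bind: Callable[[T, V], None]):
        self.items = items
        self.create_holder = create_holder
        self.bind = bind

    def item_count(self) -> int:
        return len(self.items)

    def on_bind(self, holder: V, position: int) -> None:
        # RecyclerView would call this as items scroll into the viewport.
        self.bind(self.items[position], holder)

# Usage: a contact list bound to a dict playing the role of a ViewHolder.
contacts = [{"name": "Ada", "email": "ada@example.com"}]
adapter = ListAdapter(contacts, dict,
                      lambda c, v: v.update(text=c["name"]))
holder = adapter.create_holder()
adapter.on_bind(holder, 0)
assert holder["text"] == "Ada"
```

<p>All of the type-specific knowledge lives in the bind callback passed at construction, which is exactly the refactoring the C# version makes.</p>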
<p>Let’s look at usage — you’ll notice we really just pushed some code around to different places. Here’s our usage in something like MainActivity.cs. Our binding logic has moved to the anonymous method, and we’re explicitly telling our <code>ListAdapter</code> the type of our collection and our <code>ViewHolder</code>.</p>
<p>So far I think Xamarin is pretty neat. I don’t foresee a career shift to mobile development anytime soon but this is at least making it a lot easier! More to come on the Azure bits we’re wiring up here soon.</p>
xamarinandroidlistadapter