<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>SQL on staticnotes.org</title>
    <link>/tags/sql/</link>
    <description>Recent content in SQL on staticnotes.org</description>
    <generator>Hugo</generator>
    <language>en-US</language>
    <lastBuildDate>Fri, 16 May 2025 14:00:00 +0000</lastBuildDate>
    <atom:link href="/tags/sql/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>TIL how to investigate missing data easily with DuckDB</title>
      <link>/til/2025/05/duckdb-glob_utility_function/</link>
      <pubDate>Fri, 16 May 2025 14:00:00 +0000</pubDate>
      <guid>/til/2025/05/duckdb-glob_utility_function/</guid>
      <description>&lt;p&gt;In data engineering pipelines we commonly store output data in S3 buckets at various stages.&#xA;In this particular case we stored the input data received from customers as CSV files in the S3 bucket &lt;code&gt;s3://customer-ingested/customerA.csv&lt;/code&gt; and, after some transformations, we stored the transformed data as CSV files in another S3 bucket, &lt;code&gt;s3://customer-transformed/customerA.csv&lt;/code&gt;.&lt;/p&gt;&#xA; &lt;h3 id=&#34;problem&#34;&gt;&#xA;  &lt;a href=&#34;#problem&#34; class=&#34;header-link&#34;&gt;&#xA;    Problem&#xA;  &lt;/a&gt;&#xA;&lt;/h3&gt;&lt;p&gt;For some customers the CSV file was missing from the first bucket, &lt;code&gt;customer-ingested&lt;/code&gt;, while we were sure that every customer had a file in &lt;code&gt;customer-transformed&lt;/code&gt;. However, there was one legitimate reason for a missing file: the customer had the state &lt;code&gt;is_deactivated = True&lt;/code&gt; in the application.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TIL that DuckDB now supports Avro files</title>
      <link>/til/2025/03/duckdb-avro-support/</link>
      <pubDate>Wed, 12 Mar 2025 16:00:00 +0000</pubDate>
      <guid>/til/2025/03/duckdb-avro-support/</guid>
      <description>&lt;p&gt;In the data engineering pipelines of my current company we use two main file formats:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;Parquet files for analytical workloads (columnar storage)&lt;/li&gt;&#xA;&lt;li&gt;Avro files for transactional / event-based messages (row storage)&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;In &#xA;&lt;a href=&#34;../../posts/duckdb-for-data-scientists/&#34; &#xA;&gt;Querying remote S3 files&#xA;&lt;/a&gt; I wrote about how I use DuckDB to query Parquet files stored in S3. Recently, I noticed that DuckDB &#xA;&lt;a href=&#34;https://duckdb.org/2024/12/09/duckdb-avro-extension.html&#34; target=&#34;_blank&#34;&gt;started supporting&lt;/a&gt; reading &#xA;&lt;a href=&#34;https://avro.apache.org/&#34; target=&#34;_blank&#34;&gt;Avro&lt;/a&gt; files in the same way.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TIL about the QUALIFY statement in SQL</title>
      <link>/til/sql-qualify/</link>
      <pubDate>Mon, 16 Sep 2024 00:00:00 +0000</pubDate>
      <guid>/til/sql-qualify/</guid>
      <description>&lt;p&gt;Today I came across the &lt;code&gt;QUALIFY&lt;/code&gt; clause, which is supported in some SQL dialects.&lt;span class=&#34;sidenote-number&#34;&gt;&lt;small class=&#34;sidenote&#34;&gt; It&amp;rsquo;s not part of the SQL standard but is supported by major analytical databases like BigQuery, Snowflake, Oracle, Databricks, DuckDB, etc.&lt;/small&gt;&lt;/span&gt; It&amp;rsquo;s part of the SQL dialect of &#xA;&lt;a href=&#34;https://en.wikipedia.org/wiki/Snowflake_Inc.&#34; target=&#34;_blank&#34;&gt;Snowflake&lt;/a&gt;, the data warehouse I use at work.&#xA;The &lt;code&gt;QUALIFY&lt;/code&gt; clause lets me filter the result of a query based on the result of a window function.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TIL how to set a lower threshold on a column in SQL</title>
      <link>/til/greatest-sql/</link>
      <pubDate>Wed, 07 Aug 2024 00:00:00 +0000</pubDate>
      <guid>/til/greatest-sql/</guid>
      <description>&lt;p&gt;Over the last couple of months I came across multiple instances where I wanted to write an SQL query that applies a lower threshold to all the values in a particular column of one of our Snowflake tables.&lt;/p&gt;&#xA;&lt;p&gt;I used quite clunky workarounds for that. It turns out you can use the SQL functions &lt;code&gt;GREATEST&lt;/code&gt; or &lt;code&gt;LEAST&lt;/code&gt; for it.&lt;/p&gt;&#xA; &lt;h3 id=&#34;example&#34;&gt;&#xA;  &lt;a href=&#34;#example&#34; class=&#34;header-link&#34;&gt;&#xA;    Example&#xA;  &lt;/a&gt;&#xA;&lt;/h3&gt;&lt;p&gt;In the first CTE I create a temporary table with a column &lt;code&gt;DATA_COLUMN&lt;/code&gt; that contains positive and negative numbers.&lt;/p&gt;</description>
    </item>
    <item>
      <title>DuckDB use cases for data scientists: Querying remote S3 files</title>
      <link>/posts/duckdb-for-data-scientists/</link>
      <pubDate>Sun, 25 Feb 2024 00:00:00 +0000</pubDate>
      <guid>/posts/duckdb-for-data-scientists/</guid>
      <description>&lt;p&gt;&#xA;&lt;a href=&#34;https://duckdb.org/&#34; target=&#34;_blank&#34;&gt;DuckDB&lt;/a&gt; is a pretty cool &lt;em&gt;in-process&lt;/em&gt; analytical (OLAP) database that I started to spin up on the fly for quick data analysis. What SQLite is to &lt;em&gt;Postgres&lt;/em&gt;, DuckDB is to &lt;em&gt;Snowflake&lt;/em&gt;. It is a single executable without dependencies and stores databases in local files.&lt;/p&gt;&#xA;&lt;p&gt;I can think of four use cases for data science work:&lt;/p&gt;&#xA;&lt;ol&gt;&#xA;&lt;li&gt;DuckDB supports larger-than-memory workloads by loading data sequentially. You can use it to analyse datasets that are too large for Pandas (and too small to justify PySpark).&lt;/li&gt;&#xA;&lt;li&gt;I can query CSV, Parquet, and JSON files directly from remote endpoints, e.g. S3, using SQL.&lt;/li&gt;&#xA;&lt;li&gt;I can replace Snowflake queries with DuckDB queries in unit / integration tests.&lt;/li&gt;&#xA;&lt;li&gt;I can set up a &#xA;&lt;a href=&#34;https://duckdb.org/2022/10/12/modern-data-stack-in-a-box.html&#34; target=&#34;_blank&#34;&gt;(DuckDB + dbt)&lt;/a&gt; data warehouse for local development.&lt;/li&gt;&#xA;&lt;/ol&gt;&#xA;&lt;p&gt;I want to share my workflow for the second use case. Inspecting Parquet files in AWS S3 is a pain because I can&amp;rsquo;t easily view them in the AWS console. For a few months now I have been using DuckDB to load, inspect, and analyse Parquet files from the command line. I found this reduces my cognitive load when I quickly want to check a remote file, because I don&amp;rsquo;t have to download the Parquet file and write a Python script to inspect it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>TIL how to generate a dbt staging model description from a Snowflake table</title>
      <link>/til/generate-dbt-staging-model/</link>
      <pubDate>Wed, 04 Oct 2023 00:00:00 +0000</pubDate>
      <guid>/til/generate-dbt-staging-model/</guid>
      <description>&lt;p&gt;In my team we use &#xA;&lt;a href=&#34;https://www.getdbt.com/product/what-is-dbt&#34; target=&#34;_blank&#34;&gt;dbt&lt;/a&gt; to create, document, and test data models in our data warehouse, &#xA;&lt;a href=&#34;https://www.snowflake.com/en/&#34; target=&#34;_blank&#34;&gt;Snowflake&lt;/a&gt;. We use different layers of models that start from the raw data and, with each layer, increase complexity and specialization for the target use case:&lt;/p&gt;</description>
    </item>
    <item>
      <title>TIL how to use Snowflake&#39;s VALUES sub-clause</title>
      <link>/til/sql-values-subclause/</link>
      <pubDate>Tue, 03 Oct 2023 00:00:00 +0000</pubDate>
      <guid>/til/sql-values-subclause/</guid>
      <description>&lt;p&gt;Today I found out about the useful &#xA;&lt;a href=&#34;https://docs.snowflake.com/en/sql-reference/constructs/values&#34; target=&#34;_blank&#34;&gt;VALUES()&lt;/a&gt; sub-clause in Snowflake, which lets you generate a fixed, known set of rows inside your query.&lt;/p&gt;&#xA; &lt;h3 id=&#34;problem&#34;&gt;&#xA;  &lt;a href=&#34;#problem&#34; class=&#34;header-link&#34;&gt;&#xA;    Problem&#xA;  &lt;/a&gt;&#xA;&lt;/h3&gt;&lt;p&gt;I had a table &lt;code&gt;exchange_rates&lt;/code&gt; of EUR-currency pairs with their corresponding exchange rates:&lt;/p&gt;&#xA;&lt;table&gt;&#xA;  &lt;thead&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;th&gt;date&lt;/th&gt;&#xA;          &lt;th&gt;base_currency&lt;/th&gt;&#xA;          &lt;th&gt;target_currency&lt;/th&gt;&#xA;          &lt;th&gt;exchange_rate&lt;/th&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/thead&gt;&#xA;  &lt;tbody&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;2023-10-03&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;EUR&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;GBP&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;0.87&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;2023-10-03&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;EUR&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;USD&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;1.05&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;      &lt;tr&gt;&#xA;          &lt;td&gt;2023-10-03&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;EUR&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;&amp;lsquo;JPY&amp;rsquo;&lt;/td&gt;&#xA;          &lt;td&gt;155.95&lt;/td&gt;&#xA;      &lt;/tr&gt;&#xA;  &lt;/tbody&gt;&#xA;&lt;/table&gt;&#xA;&lt;p&gt;that I wanted to join with a table &lt;code&gt;transactions&lt;/code&gt;, which contained transaction prices in different currencies:&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
