One world | Projects, maps and coding. Personal blog of dieghernan (https://dieghernan.github.io/), (c) 2026, dieghernan. Feed built Sat, 14 Mar 2026 21:01:57 +0100.

Introducing geobounds

3 min.

If you’ve ever worked with spatial data in R, this may ring a bell…

  • Search for boundary data
  • Figure out which version is “official”
  • Download a shapefile
  • Unzip it
  • Load it
  • Fix projections
  • Repeat

While searching for new data sources, I found the excellent geoBoundaries database. However, accessing the data can be tedious since it’s provided as zipped shapefiles, and as any GIS professional knows, shapefiles should die!

Previously, the rgeoboundaries package on CRAN provided access to the geoBoundaries API, but it was archived. So I decided to create my own version, and geobounds was born.

It connects directly to the excellent geoBoundaries database and returns clean, ready-to-use sf objects with a single function call. No manual downloads. No shapefile messing.

This is how it works.

Installation

geobounds was recently accepted on CRAN, so just install it with:

install.packages("geobounds")

Load the package and other complementary packages:

library(geobounds)
library(sf)
library(ggplot2)
library(dplyr)

Getting administrative levels (ADM)

Administrative level 0 (ADM0) corresponds to countries:

# Panama

gb_get_adm0(country = "Panama") |>
  ggplot() +
  geom_sf(fill = "#072357") +
  labs(caption = "Source: www.geoboundaries.org")

You can also retrieve multiple administrative levels at once. For example:

# Simplified files
gb_get(country = "Panama", adm_lvl = "all", simplified = TRUE) |>
  ggplot() +
  geom_sf(aes(fill = shapeType), color = "grey50", linewidth = 0.1) +
  facet_wrap(vars(shapeType)) +
  scale_fill_viridis_d() +
  labs(
    title = "Administrative levels of Panama",
    fill = "level",
    caption = "Source: www.geoboundaries.org"
  )

Global Composite Boundaries (CGAZ)

When you download individual country files, each country reflects its own view of borders. This results in:

  • Overlapping boundaries
  • Geographic gaps
  • Disputed territories

For clean global visualizations, geoBoundaries provides a Composite Global Administrative Zones (CGAZ) dataset that can be accessed with gb_get_world().

Here’s an example with country-level files:

# Using individual (gb_get_adm) shapefiles
gb_get_adm0(country = c("India", "Pakistan")) |>
  # Disputed area: Kashmir
  ggplot() +
  geom_sf(aes(fill = shapeName), alpha = 0.5) +
  scale_fill_manual(values = c("#FF671F", "#00401A")) +
  labs(
    fill = "Country",
    title = "Map of India & Pakistan",
    subtitle = "Note overlapping in Kashmir region",
    caption = "Source: www.geoboundaries.org"
  )

And here’s the same comparison using CGAZ with gb_get_world():

gb_get_world(c("India", "Pakistan")) |>
  ggplot() +
  geom_sf(aes(fill = shapeName), alpha = 0.5) +
  scale_fill_manual(values = c("#FF671F", "#00401A")) +
  labs(
    fill = "Country",
    title = "Map of India & Pakistan",
    subtitle = "CGAZ does not overlap",
    caption = "Source: www.geoboundaries.org"
  )

Understanding the data

The geoBoundaries database undergoes rigorous quality assurance, including manual review and hand-digitization of physical maps. This ensures the highest level of spatial accuracy for scientific and academic research.

This precision comes at a cost: some files can be large and take longer to download. For visualization and general mapping, we recommend using simplified datasets by setting simplified = TRUE.

# Different resolutions
norway <- gb_get_adm0("NOR") |>
  mutate(res = "Full resolution")
print(object.size(norway), units = "Mb")
#> 26.5 Mb

norway_simp <- gb_get_adm0(country = "NOR", simplified = TRUE) |>
  mutate(res = "Simplified")
print(object.size(norway_simp), units = "Mb")
#> 1.5 Mb

norway_all <- bind_rows(norway, norway_simp)

# Plot ggplot2
ggplot(norway_all) +
  geom_sf(fill = "#BA0C2F", color = "#00205B") +
  facet_wrap(vars(res)) +
  theme_minimal() +
  labs(caption = "Source: www.geoboundaries.org")

Caching

Downloaded files are cached locally. That means:

  • You download once
  • Re-running your script is fast
  • Your workflow stays reproducible

You can set the cache directory with:

gb_set_cache_dir("a/path/to/a/folder")
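For a throwaway session you can instead point the cache at a temporary folder; a minimal sketch (the path is illustrative, any writable folder works):

```r
library(geobounds)

# Cache geobounds downloads under the R session's temp directory,
# so they are discarded when the session ends (illustrative path)
gb_set_cache_dir(file.path(tempdir(), "geobounds"))
```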

When should you use geobounds?

Use geobounds when:

  • You need reliable global administrative boundaries
  • You want reproducible workflows
  • You prefer code over manual downloads
  • You’re building maps, dashboards, or spatial analyses

geobounds is not alone in this space. Depending on your needs, you might also want to look at:

rnaturalearth

A very popular package to access Natural Earth datasets directly from R. It’s lightweight and great for quick global maps, especially at small scales.

If you need physical layers (rivers, coastlines, elevation) alongside political boundaries, this is often a good choice.
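For comparison, a quick global map on the rnaturalearth side can be sketched like this (assuming the package is installed; ne_countries() returns sf objects):

```r
library(rnaturalearth)
library(ggplot2)

# Medium-scale world country polygons from Natural Earth
ne_countries(scale = "medium", returnclass = "sf") |>
  ggplot() +
  geom_sf()
```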

giscoR

If your focus is Europe, giscoR provides direct access to Eurostat GISCO data. It’s particularly useful for NUTS regions and European statistical boundaries.
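As a sketch, fetching the European NUTS 2 regions takes a single call (defaults such as the release year may change between GISCO versions):

```r
library(giscoR)
library(ggplot2)

# NUTS 2 statistical regions from Eurostat GISCO
gisco_get_nuts(nuts_level = 2) |>
  ggplot() +
  geom_sf()
```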

osmdata

When administrative boundaries are not enough and you need OpenStreetMap features (roads, POIs, land use, etc.), osmdata gives you powerful querying capabilities.
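A minimal osmdata query might look like the following (a sketch; the place name and feature key/value are illustrative):

```r
library(osmdata)
library(sf)

# Primary roads in Madrid from OpenStreetMap
roads <- opq("Madrid, Spain") |>
  add_osm_feature(key = "highway", value = "primary") |>
  osmdata_sf()

# The result is a list of sf layers; lines hold the roads
plot(st_geometry(roads$osm_lines))
```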

Bottom line

I built geobounds to provide direct access to geoBoundaries products. I hope this package helps you in your GIS journey.

Happy mapping!

https://dieghernan.github.io/202602_geobounds/ | posts: ggplot2, maps, r_bloggers, rspatial, rstats, sf | Thu, 12 Feb 2026 00:00:00 +0100
Mapping Antarctica
Cool maps from the South Pole

6 min.

Creating maps with R is usually straightforward, but representations that cross the International Date Line or that use polar projections can be tricky.

Different spatial-data providers use different conventions: some break geometries at certain longitudes (for example, cutting the Chukchi Peninsula), while others omit portions of the data. These inconsistencies can produce awkward artifacts near the poles.

In this post I fix the GISCO (European Commission) shapefile for Antarctica and produce clean orthographic maps. I walk through the manual corrections and then create a few example maps.

# Libraries
library(tidyverse)
library(sf)
library(giscoR)
library(ggrepel)
library(rmapshaper)

Fixing the geometry

First, we obtain the GISCO Antarctica polygon and transform it to an orthographic projection centered on the South Pole.

antarct <- gisco_get_countries(year = 2024, resolution = 1, country = "ATA") |>
  select(NAME = NAME_ENGL) |>
  # Ortho proj centered in the South Pole
  st_transform(crs = "+proj=ortho +lat_0=-90 +lon_0=0")

ggplot(antarct) +
  geom_sf(fill = "lightblue")

The shapefile contains a visible “lollipop” cut that looks unnatural in an orthographic projection. I correct it manually by:

  1. Identifying the polygon that represents the main Antarctic landmass.
  2. Converting that polygon to a sequence of coordinates (points).
  3. Removing the small sequence of points that creates the artifact.
  4. Rebuilding the polygon from the cleaned coordinates and replacing the broken geometry with the corrected one.

We convert polygons to point coordinates and inspect them to find the offending sequence:

# Identify the max
ant_explode <- antarct |>
  st_cast("POLYGON")

nrow(ant_explode)
#> [1] 778

# Max polygon

ant_max <- ant_explode[which.max(st_area(ant_explode)), ]

coords <- st_coordinates(ant_max) |>
  as_tibble() |>
  # Add id for points
  mutate(np = row_number())


ggplot(coords, aes(X, Y)) +
  geom_point(size = 0.05, color = "darkblue") +
  geom_text(aes(label = np), check_overlap = TRUE) +
  coord_equal()

From the plotted indices, we can see the problematic points fall roughly in the range 8200–9200. We inspect that interval in detail to select the exact indices to remove.

test <- coords |>
  filter(np %in% seq(8200, 9200))

test |>
  ggplot(aes(X, Y)) +
  geom_point(size = 0.05, color = "darkblue") +
  geom_text(aes(label = np), check_overlap = TRUE)

Note: This cleaning is tailored to this specific shapefile and may need to be repeated for other shapefiles. The approach is straightforward but depends on the particular geometry and projection.

# Final solution after some iterations...

test |>
  filter(np %in% seq(8289, 9130)) |>
  ggplot(aes(X, Y)) +
  geom_point(color = "darkblue") +
  labs(title = "To remove")


test |>
  filter(!np %in% seq(8289, 9130)) |>
  ggplot(aes(X, Y)) +
  geom_point(color = "darkblue") +
  labs(title = "To keep")
After removing the offending points, we rebuild the polygon and reconstitute the full Antarctica shape from the corrected piece plus the remaining polygons.

# From coordinates to polygon
newpol <- coords |>
  as.data.frame() |>
  filter(!np %in% seq(8289, 9130)) |> # Removing offending points
  select(X, Y) |>
  as.matrix() |>
  list() |>
  st_polygon() |>
  st_sfc() |>
  st_set_crs(st_crs(ant_max))

ant_max_fixed <- st_sf(st_drop_geometry(ant_max), geometry = newpol)

# Regenerate initial shape
antarctica_fixed <- bind_rows(
  ant_max_fixed,
  ant_explode[-which.max(st_area(ant_explode)), ]
) |>
  group_by(NAME) |>
  summarise(m = 1) |>
  select(-m) |>
  st_make_valid()

antarctica_fixed
#> Simple feature collection with 1 feature and 1 field
#> Geometry type: MULTIPOLYGON
#> Dimension:     XY
#> Bounding box:  xmin: -2583099 ymin: -2458296 xmax: 2690846 ymax: 2233395
#> Projected CRS: +proj=ortho +lat_0=-90 +lon_0=0
#> # A tibble: 1 × 2
#>   NAME                                                                     geometry
#> * <chr>                                                          <MULTIPOLYGON [m]>
#> 1 Antarctica (((-2456385 1179033, -2456141 1178965, -2456464 1178341, -2456563 117…

ggplot(antarctica_fixed) +
  geom_sf(fill = "lightblue")

Plotting examples

With the corrected shape we can produce maps. Below are a few examples based on proposed Antarctic flag designs.

Graham Bartram’s proposal (1996)

A simple rendition of Bartram’s original concept:

bbox <- st_bbox(antarctica_fixed) # For limits on the panel

antarctica_fixed |>
  ggplot() +
  geom_sf(fill = "white", color = NA) +
  theme(
    panel.background = element_rect(fill = "#009fdc"),
    panel.grid = element_blank(),
    axis.text = element_blank(),
    axis.ticks = element_blank()
  ) +
  labs(title = "Graham Bartram's proposal") +
  coord_sf(
    xlim = c(bbox[c(1, 3)]) * 1.8,
    ylim = c(bbox[c(2, 4)]) * 1.4
  )

Emblem of the Antarctic Treaty

This example uses graticules to create a concentric “bullseye” pattern around Antarctica. Generating such graticules and merging meridians requires a few extra steps to avoid small gaps near the pole.

# Need graticules
grats <- giscoR::gisco_get_countries() |>
  st_transform(st_crs(antarctica_fixed)) |>
  # Specify the cuts of the graticules
  st_graticule(
    lat = c(-80, -70, -60),
    lon = seq(-180, 180, 30),
    ndiscr = 10000,
    margin = 0.000001
  )


ggplot(grats) +
  geom_sf(color = "darkblue")

We merge meridians so the area around the South Pole is filled. st_graticule() can leave a tiny hole at the pole; we fix this by joining complementary meridians.

# Merge meridians
merid <- lapply(seq(-180, 0, 30), function(x) {
  df <- grats |>
    filter(type == "E") |>
    filter(degree %in% c(x, x + 180))

  df2 <- df |>
    st_geometry() |>
    st_cast("MULTIPOINT") |>
    st_union() |>
    st_cast("LINESTRING")

  sf_x <- st_sf(
    degree = x,
    type = "E",
    geometry = df2
  )
}) |> bind_rows()


grats_end <- merid |>
  bind_rows(grats |>
    filter(type != "E"))

We then cut and color the resulting graticules so they form the emblem-like pattern.

# Cut since some grats should be colored differently

antarctica_simp <- rmapshaper::ms_simplify(antarctica_fixed, keep = 0.005)
grats_yes <- st_intersection(grats_end, antarctica_simp)
grats_no <- st_difference(grats_end, antarctica_simp)

antarctica_simp |>
  ggplot() +
  geom_sf(fill = "white", color = NA) +
  theme(
    panel.background = element_rect(fill = "#072b5f"),
    panel.grid = element_blank(),
    axis.text = element_blank(),
    axis.ticks = element_blank()
  ) +
  geom_sf(data = grats_yes, color = "#072b5f", linewidth = 1) +
  geom_sf(data = grats_no, color = "white", linewidth = 1) +
  coord_sf(
    xlim = c(bbox[c(1, 3)]) * 1.8,
    ylim = c(bbox[c(2, 4)]) * 1.4
  ) +
  labs(title = "Emblem of the Antarctic Treaty")

Antarctica Flag Redesigned

In 2024, Graham Bartram revealed a new version of his original flag as part of a global campaign to raise awareness about the growing problem of microplastic pollution. The new design keeps the familiar white outline of Antarctica but swaps the plain blue background for one filled with countless tiny, colorful dots. These dots represent the microscopic bits of plastic that have been discovered even in the planet’s most untouched places - including the Antarctic ice and its surrounding oceans.

Because the design relies on randomness, we approximate it using the following procedure:

  1. Sample random points across the Antarctic polygon.
  2. Build Voronoi polygons from those points, then apply a small negative buffer to create gaps.
  3. Randomly sample the resulting polygons to increase visual noise.
  4. Color polygons so larger areas remain white while smaller polygons use magenta/pink tones.

# Maximum chunk of Antarctica, the one that we fixed

ant_max_fixed
#> Simple feature collection with 1 feature and 1 field
#> Geometry type: POLYGON
#> Dimension:     XY
#> Bounding box:  xmin: -2447764 ymin: -2125910 xmax: 2690846 ymax: 2233395
#> Projected CRS: +proj=ortho +lat_0=-90 +lon_0=0
#>         NAME                       geometry
#> 1 Antarctica POLYGON ((-2423737 1557908,...

set.seed(2024)
# Sample, Voronoi and negative buffer
plastics <- st_sample(ant_max_fixed, 3000) |>
  st_union() |>
  st_voronoi(envelope = st_geometry(ant_max_fixed)) |>
  st_collection_extract() |>
  st_buffer(dist = -10000)


# Keep only those properly included in the outline

toinc <- st_contains_properly(ant_max_fixed, plastics, sparse = FALSE) |>
  as.vector()

# Select random chunks
plastic_end <- plastics[toinc, ] |>
  st_as_sf() |>
  slice_sample(prop = 0.75)

ggplot(plastic_end) +
  geom_sf(fill = "darkblue")



# Random coloring

plastic_end$area <- st_area(plastic_end) |> as.double()

plastic_end$fill <- sample(c("#ff00ec", "#9e00ec"), nrow(plastic_end), replace = TRUE)
plastic_end$fill <- ifelse(plastic_end$area > quantile(plastic_end$area, probs = 0.4),
  "white",
  plastic_end$fill
)

bbox2 <- st_bbox(plastic_end)
ggplot() +
  geom_sf(data = plastic_end, aes(fill = fill), color = NA) +
  scale_fill_identity() +
  theme(
    panel.background = element_rect(fill = "#009fdc"),
    panel.grid = element_blank(),
    axis.text = element_blank(),
    axis.ticks = element_blank()
  ) +
  labs(title = "New redesign") +
  coord_sf(
    xlim = c(bbox2[c(1, 3)] * 1.8),
    ylim = c(bbox2[c(2, 4)]) * 1.4
  )

https://dieghernan.github.io/202510_mapping-antarctica/ | posts: beautiful_maps, ggplot2, giscoR, maps, r_bloggers, rspatial, rstats, sf | Sat, 25 Oct 2025 00:00:00 +0200
Implementing group_by count in Liquid
See the magic happening

5 min.

Liquid is an open-source template language created by Shopify back in 2006 and written in Ruby. It is widely used by several frameworks, with Jekyll being one of the most famous.

This website is created using Jekyll, specifically with my Jekyll template Chulapa.

Some time ago, @cargocultprogramming opened dieghernan/chulapa#29 because one of the components of the theme was broken in Jekyll >= 4.1.0. Digging a bit, I saw jekyll/jekyll#8214, which exposed the same issue. What seemed to be a feature was indeed a bug that some developers were exploiting.

The change is that, when applying the group_by Liquid filter to an array, it used to produce a “grouped” version of the array, while on Jekyll >= 4.1.0 it produces a different result that can’t be used in the same way.



{% assign alldocs = site.exercises %}
{% assign grouptag = alldocs | map: 'tags' | join: ',' | split: ',' | group_by: tag %}

{{ grouptag }}

<!-- Jekyll < 4.1.0 result -->
{"name"=>"tag A", "items"=>["tag A"], "size"=>1}{"name"=>"Tag B", "items"=>["Tag B"], "size"=>1}{"name"=>"Virtualbox", "items"=>["Virtualbox"], "size"=>1}{"name"=>"netcat", "items"=>["netcat"], "size"=>1}{"name"=>"whois", "items"=>["whois"], "size"=>1}{"name"=>"dig", "items"=>["dig"], "size"=>1} ... {"name"=>"Hydra", "items"=>["Hydra"], "size"=>1}

<!-- Jekyll >= 4.1.0 result -->
{"name"=>"", "items"=>["tag A", "Tag B", "Virtualbox", "netcat", "whois", "dig", ... , "Hydra"], "size"=>26}


So basically, counting items was not easy anymore. I developed a solution in pure Liquid (which happens to be quite a verbose language once you move beyond its predefined filters) that is compatible with any Jekyll version.

The algorithm is now implemented in Chulapa. You can check the results on my /tags page.

Note that the tables produced in the example are taken from my live site, hence they may change as I add more posts. The results in the tables should match the order and number of tags displayed on the /tags page.

Alternative group_by with Liquid

First, we define an array of all the tags included in the documents of my site:


{% assign alldocs = site.documents %}
{% assign alltags = alldocs | map: 'tags' | join: ',' | split: ',' %}

Cool! Now we can count the occurrences of each unique tag in the array:

<!-- Allocating array to group_by: replacement -->

<!-- Unique values -->
{% assign single_tags = alltags | uniq %}

<!-- Arrays to populate -->
{% assign count_tags = '' | split: ',' %}

<!-- Iterator 0 to number of unique tags - 1 (size = number of unique tags) -->
{% assign n_tags = single_tags | size | minus: 1 %}

{% for i in (0..n_tags) %}
<!-- Populate -->
  {% assign count_this_tag = alltags | where_exp:"item", "item == single_tags[i]" | size %}
  {% assign count_tags = count_tags | push: count_this_tag %}
{% endfor %}

<!-- Display single_tags and count_tags as a table -->
<table>
  <caption>Display count of tags on this site </caption>
  <tr>
    <th>Tag</th>
    <th>Count</th>
  </tr>
  {% for i in (0..n_tags) %}
    <tr>
      <td>{{ single_tags[i] }}</td>
      <td>{{ count_tags[i] }}</td>
    </tr>
  {% endfor %}
</table>

See results
Display count of tags on this site
Tag Count
r_bloggers 22
rstats 23
rspatial 23
sf 14
maps 27
vignette 1
rnaturalearth 3
function 4
leaflet 5
jekyll 2
html 2
beautiful_maps 8
giscoR 10
raster 1
flags 4
mapSpain 3
Wikipedia 1
cartography 4
svg 1
inset 3
r_package 5
classInt 1
terra 6
rasterpic 1
ggplot2 10
tmap 1
mapsf 1
discontinued 8
project 11
R 5
python 2
guest-author 1
COVID19 3
ggridges 1
tidyterra 4
maptiles 1
s2 1
astronomy 2
celestial 2
geojson 2
gpkg 2
resmush 1
liquid 1
chulapa 1
pebble 4
watchface 4
javascript 4
C 4
webscrapping 1
dataset 2
csv 2
json 1
twitter 1

Sorting

How can we rank the tags by their number of occurrences? We take the maximum count and loop from it down to 1 in reverse order; on each pass, every tag whose count equals the current value is appended to the sorted arrays:

<!-- Used in https://github.com/mmistakes/minimal-mistakes/blob/master/_includes/posts-taxonomy.html -->

{% assign items_max = count_tags | sort | last %}
{% assign sorted_tags = '' | split: ',' %}
{% assign sorted_count_tags = '' | split: ',' %}

{% for i in (1..items_max) reversed %}
  {% for j in (0..n_tags) %}
    {% if count_tags[j] == i %}
     {% assign sorted_tags = sorted_tags | push: single_tags[j] %}
     {% assign sorted_count_tags = sorted_count_tags | push: i %}
    {% endif %}
  {% endfor %}
{% endfor %}

{% assign sorted_tags = sorted_tags | uniq %}

<table>
  <caption>Display sorted count of tags on this site </caption>
  <tr>
    <th>Tag</th>
    <th>Count (desc sorted)</th>
  </tr>
  {%- for i in (0..n_tags) %}
    <tr>
      <td>{{ sorted_tags[i] }}</td>
      <td>{{ sorted_count_tags[i] }}</td>
    </tr>
  {%- endfor -%}
</table>

See results
Display sorted count of tags on this site
Tag Count (desc sorted)
maps 27
rstats 23
rspatial 23
r_bloggers 22
sf 14
project 11
giscoR 10
ggplot2 10
beautiful_maps 8
discontinued 8
terra 6
leaflet 5
r_package 5
R 5
function 4
flags 4
cartography 4
tidyterra 4
pebble 4
watchface 4
javascript 4
C 4
rnaturalearth 3
mapSpain 3
inset 3
COVID19 3
jekyll 2
html 2
python 2
astronomy 2
celestial 2
geojson 2
gpkg 2
dataset 2
csv 2
vignette 1
raster 1
Wikipedia 1
svg 1
classInt 1
rasterpic 1
tmap 1
mapsf 1
guest-author 1
ggridges 1
maptiles 1
s2 1
resmush 1
liquid 1
chulapa 1
webscrapping 1
json 1
twitter 1

Bottom line

Done! Here you have a clean version of the algorithm:


{% assign alldocs = site.documents %}
{% assign alltags = alldocs | map: 'tags' | join: ',' | split: ',' %}
{% assign single_tags = alltags | uniq %}

<!-- Counting -->
{% assign count_tags = '' | split: ',' %}
{% assign n_tags = single_tags | size | minus: 1 %}
{% for i in (0..n_tags) %}
  {% assign count_this_tag = alltags | where_exp:"item", "item == single_tags[i]" | size %}
  {% assign count_tags = count_tags | push: count_this_tag %}
{% endfor %}

<!-- Extra: sort -->
{% assign items_max = count_tags | sort | last %}
{% assign sorted_tags = '' | split: ',' %}
{% assign sorted_count_tags = '' | split: ',' %}

{% for i in (1..items_max) reversed %}
  {% for j in (0..n_tags) %}
    {% if count_tags[j] == i %}
     {% assign sorted_tags = sorted_tags | push: single_tags[j] %}
     {% assign sorted_count_tags = sorted_count_tags | push: i %}
    {% endif %}
  {% endfor %}
{% endfor %}

{% assign sorted_tags = sorted_tags | uniq %}

https://dieghernan.github.io/202502_liquid-group-by/ | posts: chulapa, function, html, jekyll, liquid | Fri, 28 Feb 2025 00:00:00 +0100
Happy Valentine’s Day
Do you know the Bonne projection?

1 min.

Do you know the Bonne projection? It is a very special one: applied to a map of the whole world, it produces this result. Happy Valentine’s Day!

library(sf)
library(dplyr)
library(giscoR)
library(ggplot2)
library(showtext)


world <- gisco_get_countries()

# Shaped ocean

h_earth <- world %>%
  # Fine grid
  st_make_grid(n = c(50, 50)) %>%
  # Bonne projection
  st_transform("ESRI:54024") %>%
  # To sf object (data-frame like)
  st_sf(couple = 2, someval = 1, .)


# And finally

font_add_google(name = "Emilys Candy", family = "emilys")

showtext_auto()

ggplot() +
  geom_sf(data = h_earth, fill = "#f9b7bb", color = "#f9b7bb") +
  geom_sf(data = world, fill = "#d24658", color = "#d24658") +
  theme_void() +
  labs(
    title = "Happy Valentine's Day",
    caption = "Bonne Projection (ESRI:54024)"
  ) +
  theme(
    plot.background = element_rect(
      fill = "#faddcf",
      color = "transparent"
    ),
    text = element_text(family = "emilys", colour = "#d24658"),
    plot.title = element_text(hjust = 0.5, size = rel(3)),
    plot.caption = element_text(hjust = 0.5, size = rel(2))
  )

Happy Valentine's Day

https://dieghernan.github.io/202502_st_valentine/ | posts: beautiful_maps, ggplot2, giscoR, maps, r_bloggers, rspatial, rstats, sf | Mon, 17 Feb 2025 00:00:00 +0100
Optimize your images with R and reSmush.it
Introducing the resmush package

5 min.

The resmush package has recently been accepted on CRAN! This small utility package allows for the optimization (i.e., compression) of local and online images using reSmush.it.

You can install resmush from CRAN with:

install.packages("resmush")

What is reSmush.it?

reSmush.it is a free online API that provides image optimization and has been integrated with WordPress, Drupal, and Magento. Some features of reSmush.it include:

  • Free optimization services, no API key required.
  • Optimization of local and online images.
  • Supported image file formats: PNG, JPG/JPEG, GIF, BMP, TIFF, WebP.
  • Maximum image size: 5 MB.
  • Compression using several algorithms:
    • PNGQuant: Strip unneeded chunks from PNGs while preserving full alpha transparency.
    • JPEGOptim: Lossless optimization based on optimizing the Huffman tables.
    • OptiPNG: png reducer used by several online optimizers.

reSmush.it is free of charge, but its team plans to offer more services as well as extend support to other types of image files. If you enjoy this API (as I do), consider supporting them.

Why the resmush package?

One of the main reasons I developed resmush is that I usually include precomputed vignettes with my packages (see tidyterra as an example). I found that the plots created on CRAN with the standard configuration (i.e., vignettes built on CRAN itself rather than precomputed) were not very satisfying. In some of the packages I develop, especially those related to mapping, they didn’t do justice to the actual results a user gets when running the code.

Precomputed vignettes have a drawback: the higher-quality images come at the expense of file size. To avoid exceeding CRAN’s 5 Mb maximum size policy, I developed resmush, which enables me to reduce the size of the images without a significant loss in quality.

Another use case for resmush is optimizing images in the context of web page development and SEO optimization. For example, I optimized all the images on this blog using resmush_dir(), which is a shorthand for optimizing all files in a specific folder.
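As a sketch, optimizing a whole image folder looks like this (the path is illustrative; I assume the default behavior described later, where optimized copies are written alongside the originals rather than overwriting them):

```r
library(resmush)

# Optimize every supported image found in the folder;
# by default originals are kept and optimized copies get a suffix
resmush_dir("assets/img")
```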

There are other alternatives, which I discuss at the end of this post, but in one line: the reSmush.it API is fast, requires minimal configuration, supports a wide range of formats, and needs no API key.

Using the resmush package

With local files

Let’s present an example of how a local file can be optimized. First, we create a large plot with ggplot2:

library(tidyterra)
library(ggplot2)
library(terra)
library(maptiles)

cyl <- vect(system.file("extdata/cyl.gpkg", package = "tidyterra")) %>%
  project("EPSG:3857")
cyl_img <- get_tiles(cyl, "Esri.WorldImagery", zoom = 8, crop = TRUE)
cyl_gg <- autoplot(cyl_img, maxcell = Inf) +
  geom_spatvector(data = cyl, alpha = 0.3)

cyl_gg
Original file



# And we save it for resmushing
ggsave("cyl.png", width = 5, height = 0.7 * 5)

Cool, but the file has a size of 1.7 Mb, so we can use resmush_file() to reduce it:

library(resmush)
resmush_file("cyl.png")
#> ══ resmush summary ══════════════════════════════════════════
#> ℹ Input: 1 file with size 1.7 Mb
#> ✔ Success for 1 file: Size now is 762.2 Kb (was 1.7 Mb). Saved 948.9 Kb (55.46%).
#> See result in directory '.'.

# Check
png::readPNG("cyl_resmush.png") %>%
  grid::grid.raster()
Optimized file


By default, resmush_file() and resmush_dir() do not overwrite the original file, although this behavior may be modified with the overwrite = TRUE parameter. Now, the resmushed file ("cyl_resmush.png") has a size of 762.2 Kb.
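If you prefer to replace the original in place, the call would look like this (same file as above, using the overwrite parameter mentioned in the previous paragraph):

```r
library(resmush)

# Overwrite "cyl.png" with its optimized version instead of
# writing a separate "cyl_resmush.png"
resmush_file("cyl.png", overwrite = TRUE)
```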

Let’s compare the results side-by-side:


Original picture (left/top) 1.7 Mb and optimized picture (right/bottom) 762.2 Kb (compression: 55.46%). Click on the images to enlarge.

We can verify that the image has been compressed without reducing its dimensions.

size_src <- file.size("cyl.png") %>%
  `class<-`("object_size") %>%
  format(units = "auto")
size_dest <- file.size("cyl_resmush.png") %>%
  `class<-`("object_size") %>%
  format(units = "auto")

dim_src <- dim(png::readPNG("cyl.png"))[1:2] %>% paste0(collapse = "x")
dim_dest <- dim(png::readPNG("cyl_resmush.png"))[1:2] %>% paste0(collapse = "x")

data.frame(
  source = c("original file", "compressed file"),
  size = c(size_src, size_dest),
  dimensions = c(dim_src, dim_dest)
) %>%
  knitr::kable()
source size dimensions
original file 1.7 Mb 1050x1500
compressed file 762.2 Kb 1050x1500

With online files

We can also optimize online files with resmush_url() and download them to disk. In this example, I demonstrate a feature of all the functions in resmush: they return an invisible data frame with a summary of the process.

url <- "https://dieghernan.github.io/assets/img/samples/sample_1.3mb.jpg"

# Invisible data frame
dm <- resmush_url(url, "sample_optimized.jpg", report = FALSE)
knitr::kable(dm)
src_img: https://dieghernan.github.io/assets/img/samples/sample_1.3mb.jpg
dest_img: sample_optimized.jpg
src_size: 1.3 Mb, dest_size: 985 Kb, compress_ratio: 26.63%, notes: OK
src_bytes: 1374693, dest_bytes: 1008593

Original picture (left/top) 1.3 Mb and optimized picture (right/bottom) 985 Kb (compression: 26.63%). Click on the images to enlarge.

Other alternatives

There are other alternatives for optimizing images with R, but first…

Yihui Xie, one of the most prominent figures in the R community, was recently laid off from his position at Posit PBC (formerly RStudio).

Yihui is the developer of knitr, markdown, blogdown, and bookdown, among others, and he has been one of the key contributors (if not the most) to the reproducible research space with R through his libraries.

If you have ever used and enjoyed his packages, consider sponsoring him on GitHub.

  • One of the many packages developed by Yihui is xfun, which includes the following functions for optimizing image files:
    • xfun::tinify() is similar to resmush_file() but uses TinyPNG. An API key is required.
    • xfun::optipng() compresses local files with OptiPNG (which needs to be installed locally).
  • tinieR package by jmablog. An R package that provides a full interface with TinyPNG.
  • optout package by @coolbutuseless. Similar to xfun::optipng() with additional options. Requires additional software to be installed locally.
Table 1: R packages: Comparison of alternatives for optimizing images.
tool CRAN Additional software? Online? API Key? Limits?
xfun::tinify() Yes No Yes Yes 500 files/month (Free tier)
xfun::optipng() Yes Yes No No No
tinieR No No Yes Yes 500 files/month (Free tier)
optout No Yes No No No
resmush Yes No Yes No Max size 5Mb
Table 2: R packages, formats admitted. (The per-format check marks did not survive conversion; as listed above, resmush supports PNG, JPG/JPEG, GIF, BMP, TIFF, and WebP, while the OptiPNG-based tools handle PNG only.)

Additionally, if you host your projects on GitHub, you can try Imgbot, which is free for open-source projects. Imgbot provides automatic optimization for files in your repositories, and the optimized files will be included in specific pull requests before merging into your work.

https://dieghernan.github.io/202402_optimize-images-r/ | posts: r_bloggers, r_package, resmush | Mon, 05 Feb 2024 00:00:00 +0100
Beautiful Maps with R (V): Point densities
Bertin’s dot density maps with R and GHSL

8 min.

Recently the R Graph Gallery has incorporated a new post by Benjamin Nowak showing how to create a dot density map based on the work of the French cartographer Jacques Bertin (1918 - 2010):

Jacques Bertin, Sémiologie graphique. Les diagrammes. Les réseaux. Les cartes
(1967)

In this post I create a similar map for Iberia, and additionally I show how to create a variation using a hexagonal grid instead of a rectangular one. This post is part of the series Beautiful Maps with R.

Libraries

I use the following libraries for loading, manipulating, and plotting spatial data (both raster and vector):

# Base spatial packages
library(terra)
library(sf)
# Spatial data
library(giscoR)
# Wrangling and plotting
library(tidyverse)
library(tidyterra)
library(ggtext)
# Additional for hex grids
# Supporting for units and bridge raster to polygon
library(units)
library(exactextractr)

Get the data

The first step is always to get the data we need. My final map would use as a base a circular layout with Iberia in the middle (that would be our observation window), so we can get the corresponding shapes and create a buffer around it.

After that, we will extract the population spatial distribution from GHSL - Global Human Settlement Layer. We will use the global file with a resolution of 1 km in the Mollweide projection (ESRI:54009).

# Create observation window based the Iberian Peninsula and surroundings
# We create a buffered circle
owin <- gisco_get_countries(
  country = c("ES", "PT"),
  # We buffer in 3035 since I want my final map on this projection
  epsg = 3035,
  resolution = 60
) %>%
  st_geometry() %>%
  st_union() %>%
  st_centroid(of_largest_polygon = TRUE) %>%
  st_buffer(750000) %>%
  # But for extracting raster data we need this in Mollweide so far
  st_transform(crs = "ESRI:54009")

# Get additional shapes and cut them to the owin
regions <- gisco_get_nuts(resolution = 3, nuts_level = 2) %>%
  st_transform(st_crs(owin)) %>%
  st_intersection(owin)

countries <- gisco_get_countries(resolution = 3) %>%
  st_transform(st_crs(owin)) %>%
  st_intersection(owin)

# Base map
base_gg <- ggplot() +
  geom_sf(data = regions, fill = NA, color = "black", linewidth = 0.1) +
  geom_sf(data = countries, fill = NA, linewidth = 0.5, color = "black") +
  geom_sf(data = owin, fill = NA, linewidth = 0.75, color = "black")

base_gg

plot of chunk 202312_basemap

That is our base map. Now we download the GHSL data programmatically and check that everything is correct. At this point, it is worth using the win argument when reading the raster with terra::rast(), as it loads only the desired area, with the corresponding performance improvement.

# Download data
# We need the following file (download 305Mb)
url <- "https://jeodpp.jrc.ec.europa.eu/ftp/jrc-opendata/GHSL/GHS_POP_GLOBE_R2023A/GHS_POP_E2030_GLOBE_R2023A_54009_1000/V1-0/GHS_POP_E2030_GLOBE_R2023A_54009_1000_V1_0.zip"

# This is where I store the file; you may need to modify the folder
fname <- file.path("~/R/mapslib/GHS", basename(url))
if (!file.exists(fname)) {
  download.file(url, fname)
}
zip_content <- unzip(fname, list = TRUE)
zip_content
#>                                                 Name    Length                Date
#> 1     GHS_POP_E2030_GLOBE_R2023A_54009_1000_V1_0.tif 263188307 2023-04-28 21:11:00
#> 2 GHS_POP_E2030_GLOBE_R2023A_54009_1000_V1_0.tif.ovr  88461532 2023-04-28 21:11:00
#> 3                         GHSL_Data_Package_2023.pdf   9761851 2023-10-27 14:30:00
#> 4           GHS_POP_GLOBE_R2023A_input_metadata.xlsx    137963 2023-04-30 10:42:00

# Unzip
unzip(fname, exdir = "~/R/mapslib/GHS", junkpaths = TRUE, overwrite = FALSE)

# Create the path to the file
global_tiff <- zip_content$Name[grepl("tif$", zip_content$Name)]
global_tiff_path <- file.path("~/R/mapslib/GHS", global_tiff)

# And read cropping to our owin
pop_init <- rast(global_tiff_path, win = as_spatvector(owin))

# Consistent naming of the layer
names(pop_init) <- "population"

ncell(pop_init)
#> [1] 2255923

# And check with tidyterra
base_gg +
  geom_spatraster(data = pop_init, maxcell = 50000) +
  scale_fill_viridis_c(na.value = "transparent", alpha = 0.3)

plot of chunk 202312_base_raster-1

Data wrangling

The GHSL information contains the estimated population on each grid cell. However, the file has a high resolution (more than 2 million cells), so for plotting purposes we are going to reduce (i.e. aggregate) the number of cells to get a better dot visualization. Once aggregated, we compute the area of each new cell and the corresponding population density.

Instead of using the numeric range of densities, we will use categories for the final map, so we classify the density into different groups:

# Reduce resolution for visualization
# Compute factor to reduce raster to (approx.) 100 rows:
nrow(pop_init)
#> [1] 1511
factor <- round(nrow(pop_init) / 100)

# Each new cell will contain the sum of population of the aggregated cells
pop_agg <- terra::aggregate(pop_init,
  fact = factor, fun = "sum",
  na.rm = TRUE
)


# Compute area of each new cell
pop_agg$area_km2 <- cellSize(pop_agg, unit = "km")

# Compute densities, mask to owin, convert to points and create categories
pop_points <- pop_agg %>%
  # Compute density by cell
  mutate(dens = population / area_km2) %>%
  select(dens) %>%
  # Mask to the countries shapes
  mask(as_spatvector(countries), touches = FALSE) %>%
  # SpatVector as points
  as.points() %>%
  # Categorize
  mutate(cat = case_when(
    dens < 25 ~ "A",
    dens < 50 ~ "B",
    dens < 100 ~ "C",
    dens < 250 ~ "D",
    dens < 500 ~ "E",
    dens < 1500 ~ "F",
    TRUE ~ "G"
  ))
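As a side note, the same seven-category classification could be written with base R's cut() instead of dplyr::case_when(). A minimal sketch with illustrative density values (not taken from the actual raster):

```r
# Same breaks as the case_when() above, expressed as left-closed
# intervals [a, b) via right = FALSE
dens <- c(10, 30, 70, 200, 400, 1000, 2000)
cats <- cut(dens,
  breaks = c(-Inf, 25, 50, 100, 250, 500, 1500, Inf),
  labels = LETTERS[1:7],
  right = FALSE
)
cats
#> [1] A B C D E F G
#> Levels: A B C D E F G
```

cut() returns a factor, which also spares us from setting the level order explicitly when mapping the categories to point sizes.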

Final plot

And finally the map. In this case I will save it as a high-resolution square map.

# Final plot
final_plot <- base_gg +
  # Layer, this object is a SpatVector instead of sf object
  geom_spatvector(
    pop_points,
    mapping = aes(size = cat),
    # Use a point with a border AND a fill
    pch = 21, color = "white", fill = "black", linewidth = 0.05
  ) +
  scale_size_manual(
    values = c(0.3, 1, 1.25, 1.5, 2, 2.5, 3.5),
    labels = c(
      "< 25", "[25, 50)", "[50, 100)", "[100, 250)",
      "[250, 500)", "[500, 1500)", "≥ 1500"
    ),
    guide = guide_legend(
      nrow = 1,
      title.position = "top",
      keywidth = 1,
      label.position = "bottom"
    )
  ) +
  labs(
    title = "**Population density in Iberia**",
    subtitle = "in the way of Jacques Bertin",
    size = "<span style='color:grey'>Inhabitants per km<sup>2</sup></span>",
    caption = "**Data** GHSL **| Plot** @dhernangomez based on @BjnNowak"
  ) +
  theme_void() +
  # This CRS for representation only
  coord_sf(expand = TRUE, crs = 3035) +
  theme(
    # plot.margin = margin(20,0,20,0,"cm"),
    plot.background = element_rect(fill = "white", color = NA),
    legend.position = "bottom",
    legend.margin = margin(r = 20, unit = "pt"),
    legend.title = element_markdown(),
    legend.key.width = unit(50, "pt"),
    plot.title = element_markdown(hjust = 0.5, size = 20),
    plot.subtitle = element_text(hjust = 0.5, color = "grey40"),
    plot.caption = element_markdown(
      color = "grey20",
      margin = margin(t = 20, b = 5, unit = "pt"),
      hjust = .5
    )
  )



ggsave("202312_finalmap.png", final_plot, dpi = 300, width = 8, height = 8)

plot of chunk 202312_base_raster-1

Alternative hexagonal grid

The previous plot is based on the centroids of each cell of the raster, which form, by definition, a rectangular grid. I would also like to experiment with hexagonal grids (i.e. grids of hexagons), since I have the feeling that they look more “natural” than rectangular ones, which present a regularity hardly seen in the wild.

The issue here is that terra does not produce this type of grid; however, it is possible to create one with sf::st_make_grid(), so the workflow for this alternative is:

  1. Create a hexagonal grid, where each hexagon represents an area similar to that of each cell of the aggregated raster.
  2. Extract the values of the raster to the new grid.
  3. Finally, follow the same data wrangling and plotting steps as before.

When working with sf::st_make_grid(square = FALSE), the parameter cellsize should be the “diameter” of the hexagon instead of its area. Luckily, we can infer this value, since the area of a hexagon is

\[A = \frac{\sqrt{3}}{2}d^{2}\]

So we can extract \(d\) from the previous expression knowing the area \(A\) of the aggregated cells:
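Before doing that with units objects, we can sanity-check the inversion \(d = \sqrt{2A/\sqrt{3}}\) in plain base R, using an illustrative area value:

```r
# Check that d = sqrt(2 * A / sqrt(3)) inverts the hexagon area formula
A <- 224.7461              # illustrative cell area in km^2
d <- sqrt(2 * A / sqrt(3)) # inferred "diameter" (edge-to-edge distance)

# Plugging d back into A = (sqrt(3) / 2) * d^2 recovers the area
sqrt(3) / 2 * d^2
#> [1] 224.7461
```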

# Hex grid with sf ----

# Avg size of the cells on the aggregated grid
target_area <- cellSize(pop_agg, unit = "km") %>%
  pull() %>%
  mean() %>%
  as_units("km^2")

target_area
#> 224.7461 [km^2]

# Infer diam hex
diam_hex <- sqrt(2 * target_area / sqrt(3))
# Create hexagonal grid
pop_agg_sf <- st_make_grid(owin, cellsize = diam_hex, square = FALSE)

length(pop_agg_sf)
#> [1] 10340
ncell(pop_agg)
#> [1] 10100

area_km2 <- st_area(pop_agg_sf) %>%
  set_units("km^2") %>%
  as.double()

pop_agg_sf <- st_sf(area_km2 = area_km2, geom = pop_agg_sf)

Now, we use exact_extract() to extract the population in each hexagonal cell.

# Extract aggregated population by hex cell
pop_agg_sf$population <- exact_extract(pop_init,
  y = pop_agg_sf,
  progress = FALSE,
  fun = "sum"
)


base_gg +
  geom_sf(
    data = pop_agg_sf %>% filter(population > 0),
    aes(fill = population), color = NA
  ) +
  scale_fill_viridis_c(na.value = "transparent", alpha = 0.3)

plot of chunk 202312_base_raster-1

Finally, we compute densities, create the categories and produce the map:

# Mask and categorize
pop_sf_points <- pop_agg_sf %>%
  # Compute density by cell
  mutate(dens = population / area_km2) %>%
  select(dens) %>%
  # To points and filter to country shape
  st_centroid(of_largest_polygon = TRUE) %>%
  st_filter(countries) %>%
  # Categorize
  mutate(cat = case_when(
    dens < 25 ~ "A",
    dens < 50 ~ "B",
    dens < 100 ~ "C",
    dens < 250 ~ "D",
    dens < 500 ~ "E",
    dens < 1500 ~ "F",
    TRUE ~ "G"
  ))

# Final plot
final_plot_hex <- base_gg +
  # Layer, this object is an sf object
  geom_sf(
    pop_sf_points,
    mapping = aes(size = cat),
    # Use a point with a border AND a fill
    pch = 21, color = "white", fill = "black", linewidth = 0.05
  ) +
  scale_size_manual(
    values = c(0.3, 1, 1.25, 1.5, 2, 2.5, 3.5),
    labels = c(
      "< 25", "[25, 50)", "[50, 100)", "[100, 250)",
      "[250, 500)", "[500, 1500)", "≥ 1500"
    ),
    guide = guide_legend(
      nrow = 1,
      title.position = "top",
      keywidth = 1,
      label.position = "bottom"
    )
  ) +
  labs(
    title = "**Population density in Iberia**",
    subtitle = "in the way of Jacques Bertin",
    size = "<span style='color:grey'>Inhabitants per km<sup>2</sup></span>",
    caption = "**Data** GHSL **| Plot** @dhernangomez based on @BjnNowak"
  ) +
  theme_void() +
  # This CRS for representation only
  coord_sf(expand = TRUE, crs = 3035) +
  theme(
    # plot.margin = margin(20,0,20,0,"cm"),
    plot.background = element_rect(fill = "white", color = NA),
    legend.position = "bottom",
    legend.margin = margin(r = 20, unit = "pt"),
    legend.title = element_markdown(),
    legend.key.width = unit(50, "pt"),
    plot.title = element_markdown(hjust = 0.5, size = 20),
    plot.subtitle = element_text(hjust = 0.5, color = "grey40"),
    plot.caption = element_markdown(
      color = "grey20",
      margin = margin(t = 20, b = 5, unit = "pt"),
      hjust = .5
    )
  )

ggsave("202312_finalmap_hex.png", final_plot_hex, dpi = 300, width = 8, height = 8)

plot of chunk 202312_finalmap_hex

And that’s it! Which one do you like the most? Let me know in the comments.

References

  • Bertin J (1967). Sémiologie graphique. Les diagrammes. Les réseaux. Les cartes. Gauthier-Villars, Paris.
  • European Commission, Joint Research Centre (2023). GHSL Data Package 2023. Publications Office, LU. https://doi.org/10.2760/098587
  • Hernangómez D (2023). “Using the tidyverse with terra objects: the tidyterra package.” Journal of Open Source Software, 8(91), 5751. ISSN 2475-9066. https://doi.org/10.21105/joss.05751
  • Pesaresi M, Politis P (2023). “GHS-BUILT-C R2023A - GHS Settlement Characteristics, derived from Sentinel2 composite (2018) and other GHS R2023A data.” https://doi.org/10.2905/3C60DDF6-0586-4190-854B-F6AA0EDC2A30
]]>
https://dieghernan.github.io/202312_bertin_dots/ posts beautiful_mapsggplot2giscoRmapsr_bloggersrspatialrstatssfterratidyterra https://dieghernan.github.io/202312_bertin_dots/ Sat, 16 Dec 2023 00:00:00 +0100
Star Map with R Creating Star Map Visualizations Based on Location and Date

13 min.

A couple of weeks ago I was doing my daily check on StackOverflow when I found a question by Benjamin Smith that blew my mind: Creating Star Map Visualizations Based on Location and Date !!

Oh my! Can we do this with R? The answer is: of course! In fact, Kim Fitter already worked on this two years ago; see his Celestial Maps. So I decided to put some work into this.

Since then, I also learnt that Benjamin and I have been working in parallel on the same topic. He is preparing an R package named starBliss, and hopefully this post will be of some help.

I have very little knowledge of this topic, so if you find errors or have any suggestions please let me know in the Comments section below.

First things first: The data

The initial data source of all these projects (Kim's, Benjamin's and mine) is the same: the data provided in the D3 plugin d3-celestial by Olaf Frohn. As Kim Fitter pointed out in his post, these data files present the problem (experienced by almost any sf user) of lines crossing the international date line (longitude 180º). I also found that some files are not valid (as reported by sf::st_is_valid()).

Solution? I processed and fixed almost every file (some of them, such as those corresponding to the Milky Way or the lines of the Chinese constellations, manually) to provide a clean set of files. That is the origin of my project Celestial Data, which provides all these files in several spatial formats. Please check out the repo to learn more about it.

Creating a Star Map with R

The first step is loading a bunch of libraries that will help us with this cosmic task:

# Spatial manipulation
library(sf)
library(s2)
library(nominatimlite)

## Wrangle data and dates
library(dplyr)
library(lubridate)
library(lutz)

## Visualization
library(ggplot2)
library(ggfx)
library(ggshadow)

Helper funs

Now we prepare some helper functions:

  • load_celestial() just downloads the corresponding .geojson from the Celestial Data repo1 to a specific directory cachedir and loads it with sf::st_read().

  • pretty_lonlat() is a labeller that returns a decimal longitude or latitude coordinate in the format degrees/minutes/seconds (e.g. a latitude of 34.72782 would be converted into 34° 43’ 40.15” N).

Show load_celestial and pretty_lonlat()
load_celestial <- function(filename,
                           url = "https://cdn.jsdelivr.net/gh/dieghernan/celestial_data@main/data/",
                           cachedir = tempdir()) {
  if (!dir.exists(cachedir)) {
    stop(
      "Please create ",
      path.expand(cachedir),
      " directory first"
    )
  }

  url <- file.path(url, filename)
  local_path <- file.path(cachedir, filename)


  if (!file.exists(local_path)) {
    download.file(url, local_path, mode = "wb", quiet = TRUE)
  }

  celestial <- sf::st_read(local_path, quiet = TRUE)

  return(celestial)
}

pretty_lonlat <- function(x, type, accuracy = 2) {
  positive <- x >= 0

  # Decompose
  x <- abs(x)
  D <- as.integer(x)
  m <- (x - D) * 60
  M <- as.integer(m)
  S <- round((m - M) * 60, accuracy)

  # Get label
  if (type == "lon") {
    lab <- ifelse(positive, "E", "W")
  } else {
    lab <- ifelse(positive, "N", "S")
  }


  # Compose
  label <- paste0(D, "\u00b0 ", M, "' ", S, '\" ', lab)
  return(label)
}
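We can quickly verify the example mentioned above (a latitude of 34.72782) with a condensed, self-contained variant of the same conversion (dd_to_dms() is a hypothetical helper written for this check, not part of the post's code):

```r
# Condensed decimal-degrees to DMS conversion, latitude only
dd_to_dms <- function(x, accuracy = 2) {
  lab <- if (x >= 0) "N" else "S"
  x <- abs(x)
  D <- as.integer(x)
  m <- (x - D) * 60
  M <- as.integer(m)
  S <- round((m - M) * 60, accuracy)
  paste0(D, "\u00b0 ", M, "' ", S, "\" ", lab)
}

dd_to_dms(34.72782)
#> [1] "34° 43' 40.15\" N"
```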

Additionally, you may notice that in this d3-celestial demo there is some degree of rotation depending on the location and the time. I looked into how this is done in the d3-celestial plugin and found the function getMST(dt, lng), which I ported to R as get_mst(). From the research I did, this function computes the Mean Sidereal Time (MST) for a given longitude expressed in degrees (maybe Local Sidereal Time is then the more accurate term? Just wondering), following the formulas provided by Meeus (1998). If you want to know more about this, I recommend this post by James Still.

So basically the input is a POSIXct date-time and a given longitude, and the result is an alternative longitude that we use to adjust the projection of our Star Map. This provides the rotation observed in the d3-celestial plugin.

Show get_mst()
# Derive rotation degrees of the projection given a date and a longitude
get_mst <- function(dt, lng) {
  desired_date_utc <- lubridate::with_tz(dt, "UTC")


  yr <- lubridate::year(desired_date_utc)
  mo <- lubridate::month(desired_date_utc)
  dy <- lubridate::day(desired_date_utc)
  h <- lubridate::hour(desired_date_utc)
  m <- lubridate::minute(desired_date_utc)
  s <- lubridate::second(desired_date_utc)

  if ((mo == 1) || (mo == 2)) {
    yr <- yr - 1
    mo <- mo + 12
  }

  # Gregorian calendar correction, not applied to dates
  # before the reform. See https://squarewidget.com/julian-day/
  if (lubridate::as_date(dt) > as.Date("1582-10-14")) {
    a <- floor(yr / 100)
    b <- 2 - a + floor(a / 4)
  } else {
    b <- 0
  }
  c <- floor(365.25 * yr)
  d <- floor(30.6001 * (mo + 1))

  # days since J2000.0
  jd <- b + c + d - 730550.5 + dy + (h + m / 60 + s / 3600) / 24
  jt <- jd / 36525

  # Rotation
  mst <- 280.46061837 + 360.98564736629 * jd +
    0.000387933 * jt^2 - jt^3 / 38710000.0 + lng

  # Modulo 360 degrees
  mst <- mst %% 360

  return(mst)
}
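As a sanity check, we can re-derive in plain base R (no lubridate) the rotation that this post computes later for Madrid at 2015-09-22 03:45 CEST, i.e. 01:45 UTC at longitude -3.703582:

```r
# Hard-coded UTC components of the example date
yr <- 2015; mo <- 9; dy <- 22 # mo > 2, so no Jan/Feb year adjustment needed
h <- 1; m <- 45; s <- 0
lng <- -3.703582

# Gregorian calendar correction (the date is after 1582-10-14)
a <- floor(yr / 100)
b <- 2 - a + floor(a / 4)
c0 <- floor(365.25 * yr)
d0 <- floor(30.6001 * (mo + 1))

# Days since J2000.0 and Julian centuries
jd <- b + c0 + d0 - 730550.5 + dy + (h + m / 60 + s / 3600) / 24
jt <- jd / 36525

mst <- (280.46061837 + 360.98564736629 * jd +
  0.000387933 * jt^2 - jt^3 / 38710000.0 + lng) %% 360

round(mst, 5)
#> [1] 23.15892
```

This matches the lon_prj value obtained with get_mst() further down in the post.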

The final result will have a spherical outline, which means that we need to perform a spherical cut. Did you know that in r-spatial the Earth is no longer flat? Thanks to s2 we can overcome this issue. Additionally, we get rid of artifacts derived from the change of projection. This also includes some refinements to avoid empty/non-valid geometries, as well as GEOMETRYCOLLECTION handling. The function sf_spherical_cut() does all that for us.

Show sf_spherical_cut()
# Cut a sf object with a buffer using spherical s2 geoms
# Optionally, project and flip

sf_spherical_cut <- function(x, the_buff, the_crs = sf::st_crs(x), flip = NULL) {
  # Get geometry type
  geomtype <- unique(gsub("MULTI", "", sf::st_geometry_type(x)))[1]

  # Keep the data frame, s2 drops it
  the_df <- sf::st_drop_geometry(x)
  the_geom <- sf::st_geometry(x)
  # Convert to s2 if needed
  if (!inherits(the_buff, "s2_geography")) {
    the_buff <- sf::st_as_s2(the_buff)
  }

  the_cut <- the_geom %>%
    # Cut with s2
    sf::st_as_s2() %>%
    s2::s2_intersection(the_buff) %>%
    # Back to sf and add the df
    sf::st_as_sfc() %>%
    sf::st_sf(the_df, geometry = .) %>%
    dplyr::filter(!sf::st_is_empty(.)) %>%
    sf::st_transform(crs = the_crs)

  # If it is not POINT filter by valid and non-empty
  # This is for performance
  if (!geomtype == "POINT") {
    # If any is GEOMETRYCOLLECTION extract the right value
    if (any(sf::st_geometry_type(the_cut) == "GEOMETRYCOLLECTION")) {
      the_cut <- the_cut %>%
        sf::st_collection_extract(type = geomtype, warn = FALSE)
    }

    the_cut <- the_cut %>%
      dplyr::filter(!is.na(sf::st_is_valid(.)))
  }

  if (!is.null(flip)) {
    the_cut <- the_cut %>%
      dplyr::mutate(geometry = geometry * flip) %>%
      sf::st_set_crs(the_crs)
  }

  return(the_cut)
}

Inputs

Now we are ready to start creating our visualization. We need only two inputs:

  • A desired location, that we would geocode with nominatimlite.

  • A specific moment of time.

# Inputs
desired_place <- "Madrid, Spain"

# We are not using yet the timezone
desired_date <- make_datetime(
  year = 2015,
  month = 9,
  day = 22,
  hour = 3,
  min = 45
)

# Geocode place with nominatimlite
desired_place_geo <- geo_lite(desired_place, full_results = TRUE)

desired_place_geo %>%
  select(address, lat, lon)
#> # A tibble: 1 × 3
#>   address                                                                                    lat   lon
#>   <chr>                                                                                    <dbl> <dbl>
#> 1 Madrid, Área metropolitana de Madrid y Corredor del Henares, Comunidad de Madrid, España  40.4 -3.70

# And get the coordinates
desired_loc <- desired_place_geo %>%
  select(lat, lon) %>%
  unlist()

desired_loc
#>       lat       lon 
#> 40.416705 -3.703582

With respect to our object desired_date, it is quite relevant for accurate plotting to specify the correct time zone. Since we already know the latitude and longitude of our desired location, we can easily get that with the lutz package:

desired_date
#> [1] "2015-09-22 03:45:00 UTC"

# Get tz
get_tz <- tz_lookup_coords(desired_loc[1], desired_loc[2], warn = FALSE)

get_tz
#> [1] "Europe/Madrid"

# Force it to be local time
desired_date_tz <- force_tz(desired_date, get_tz)

desired_date_tz
#> [1] "2015-09-22 03:45:00 CEST"
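For reference, force_tz() keeps the wall-clock time while changing the underlying instant. A base-R sketch of the same semantics (not what lubridate does internally):

```r
# Reinterpret a wall-clock time in another time zone (what force_tz() does):
# the clock still reads 03:45, but the instant it refers to changes
dt_utc <- as.POSIXct("2015-09-22 03:45:00", tz = "UTC")
dt_mad <- as.POSIXct(format(dt_utc, "%Y-%m-%d %H:%M:%S"), tz = "Europe/Madrid")

format(dt_mad, "%Y-%m-%d %H:%M:%S %Z")
#> [1] "2015-09-22 03:45:00 CEST"

# The instants differ by the UTC offset (+2 hours in September)
difftime(dt_utc, dt_mad, units = "hours")
#> Time difference of 2 hours
```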

About time zones

Some online shops that create this kind of map (I won't post links) include this script:

... 

'selectedHour': '22',
'selectedMinute': '00',

...

This means that those shops are really creating the map at YYYY-MM-DD 22:00:00 UTC. If you want to replicate that exactly (even though the resulting night sky is not accurate; consider that in New Zealand the local time at that moment would be 10:00, hence no stars would be visible), you would need to adjust desired_date_tz as:

as_datetime(paste(as.Date(desired_date_tz), "22:00:00"), tz = "UTC")
#> [1] "2015-09-22 22:00:00 UTC"

# That would really correspond to 10:00
as_datetime(paste(as.Date(desired_date_tz), "22:00:00"), tz = "UTC") %>%
  with_tz("Pacific/Auckland")
#> [1] "2015-09-23 10:00:00 NZST"

Setup

Now we can start creating our buffers and projections, which will help us crop the celestial data objects.

I also noticed that the location demo of d3-celestial.js uses the Airy projection, so we are going to replicate that as well:

# Get the rotation and prepare buffer and projection

# Get right degrees
lon_prj <- get_mst(desired_date_tz, desired_loc[2])
lat_prj <- desired_loc[1]

c(lon_prj, lat_prj)
#>      lon      lat 
#> 23.15892 40.41670

# Create proj4string w/ Airy projection

target_crs <- paste0("+proj=airy +x_0=0 +y_0=0 +lon_0=", lon_prj, " +lat_0=", lat_prj)


target_crs
#> [1] "+proj=airy +x_0=0 +y_0=0 +lon_0=23.1589164999314 +lat_0=40.4167047"

# We need to flip celestial objects to get the impression of looking from the
# Earth to the sky, instead of from the sky to the Earth
# https://stackoverflow.com/a/75064359/7877917
# Flip matrix for affine transformation

flip_matrix <- matrix(c(-1, 0, 0, 1), 2, 2)


# And create an s2 buffer of the visible hemisphere at the given location
hemisphere_s2 <- s2_buffer_cells(
  as_s2_geography(
    paste0("POINT(", lon_prj, " ", lat_prj, ")")
  ),
  9800000,
  max_cells = 5000
)

# This one is for plotting
hemisphere_sf <- hemisphere_s2 %>%
  st_as_sf() %>%
  st_transform(crs = target_crs) %>%
  st_make_valid()

Celestial Data

Now, we can load the data of our choice. In this case I have chosen to represent the Milky Way, the Constellation Lines and the Stars.

We also add some additional variables that will help us improve the visualization.

mw <- load_celestial("mw.min.geojson")

# Add colors to MW to use on fill
cols <- colorRampPalette(c("white", "yellow"))(5)
mw$fill <- factor(cols, levels = cols)

ggplot(mw) +
  geom_sf(aes(fill = fill)) +
  scale_fill_identity()

plot of chunk 20230125_mw


# And process it

# Cut to buffer
mw_end <- sf_spherical_cut(mw,
  the_buff = hemisphere_s2,
  # Change the crs
  the_crs = target_crs,
  flip = flip_matrix
)


ggplot(mw_end) +
  geom_sf(aes(fill = fill)) +
  scale_fill_identity()

plot of chunk 20230125_mw

Now it is the turn of the constellations:

const <- load_celestial("constellations.lines.min.geojson")

ggplot(const) +
  geom_sf() +
  coord_sf(expand = FALSE)

plot of chunk 20230125_const


# Cut to buffer

const_end <- sf_spherical_cut(const,
  the_buff = hemisphere_s2,
  # Change the crs
  the_crs = target_crs,
  flip = flip_matrix
)


ggplot(const_end) +
  geom_sf() +
  coord_sf(expand = FALSE)

plot of chunk 20230125_const

And finally the stars:

stars <- load_celestial("stars.6.min.geojson")

ggplot(stars) +
  # We use relative brightness (br) as aes
  geom_sf(aes(size = br, alpha = br), shape = 16) +
  scale_size_continuous(range = c(0.5, 6)) +
  scale_alpha_continuous(range = c(0.1, 0.8)) +
  coord_sf(expand = FALSE)

plot of chunk 20230125_stars


# Cut to buffer

stars_end <- sf_spherical_cut(stars,
  the_buff = hemisphere_s2,
  # Change the crs
  the_crs = target_crs,
  flip = flip_matrix
)

ggplot(stars_end) +
  # We use relative brightness (br) as aes
  geom_sf(aes(size = br, alpha = br), shape = 16) +
  scale_size_continuous(range = c(0.5, 6)) +
  scale_alpha_continuous(range = c(0.1, 0.8))

plot of chunk 20230125_stars

Graticules

We are also going to include graticules, so the Earth's poles can be quickly spotted. In this case we don't apply any affine transformation, so the flip parameter of sf_spherical_cut() is left as NULL.

grat <- st_graticule(
  ndiscr = 5000,
  lat = seq(-90, 90, 10),
  lon = seq(-180, 180, 30)
)

ggplot(grat) +
  geom_sf() +
  coord_sf(expand = FALSE)

plot of chunk 20230125_grat


# Cut to buffer; we don't flip this one (it is not an object in the sky)
grat_end <- sf_spherical_cut(
  x = grat,
  the_buff = hemisphere_s2,
  # Change the crs
  the_crs = target_crs
)


ggplot(grat_end) +
  geom_sf() +
  coord_sf(expand = FALSE)

plot of chunk 20230125_grat

Visualization with ggplot2

We are almost set! To prepare the final map, we first create the corresponding labels, which will be included as the caption of the ggplot2 map:

lat_lab <- pretty_lonlat(desired_loc[1], type = "lat")
lon_lab <- pretty_lonlat(desired_loc[2], type = "lon")

pretty_labs <- paste(lat_lab, "/", lon_lab)

cat(pretty_labs)
#> 40° 25' 0.14" N / 3° 42' 12.9" W

# Create final caption to put on bottom

pretty_time <- paste(
  # Pretty Day
  scales::label_date(
    format = "%d %b %Y",
    locale = "en"
  )(desired_date_tz),
  # Pretty Hour
  format(desired_date_tz, format = "%H:%M", usetz = TRUE)
)

cat(pretty_time)
#> 22 Sep 2015 03:45 CEST

# Our final caption
caption <- toupper(paste0(
  "Star Map\n",
  desired_place, "\n",
  pretty_time, "\n",
  pretty_labs
))


cat(caption)
#> STAR MAP
#> MADRID, SPAIN
#> 22 SEP 2015 03:45 CEST
#> 40° 25' 0.14" N / 3° 42' 12.9" W

We can enhance the visualization by applying some interesting effects:

  • We want the Milky Way to appear a bit blurry instead of as a Well-Known geometry, mimicking how we really see it from the Earth. We can use ggfx::with_blur() to get this effect.

  • We can also add a glowing effect to our stars and constellations (have a look at Dominic Royé’s post Firefly Cartography to know more about this). The only drawback is that I was not able to use ggshadow with LINESTRING geometries (Dominic shows how to do it with POINT), so instead I converted my lines (the constellations) to coordinates and applied ggshadow::geom_glowpath().

So we are now ready to create the final visualization:

# Prepare MULTILINESTRING

const_end_lines <- const_end %>%
  st_cast("MULTILINESTRING") %>%
  st_coordinates() %>%
  as.data.frame()


ggplot() +
  # Graticules
  geom_sf(data = grat_end, color = "grey60", linewidth = 0.25, alpha = 0.3) +
  # A blurry Milky Way
  with_blur(
    geom_sf(
      data = mw_end, aes(fill = fill), alpha = 0.1, color = NA,
      show.legend = FALSE
    ),
    sigma = 8
  ) +
  scale_fill_identity() +
  # Glowing stars
  geom_glowpoint(
    data = stars_end, aes(
      alpha = br, size =
        br, geometry = geometry
    ),
    color = "white", show.legend = FALSE, stat = "sf_coordinates"
  ) +
  scale_size_continuous(range = c(0.05, 0.75)) +
  scale_alpha_continuous(range = c(0.1, 0.5)) +
  # Glowing constellations
  geom_glowpath(
    data = const_end_lines, aes(X, Y, group = interaction(L1, L2)),
    color = "white", size = 0.5, alpha = 0.8, shadowsize = 0.4, shadowalpha = 0.01,
    shadowcolor = "white", linejoin = "round", lineend = "round"
  ) +
  # Border of the sphere
  geom_sf(data = hemisphere_sf, fill = NA, color = "white", linewidth = 1.25) +
  # Caption
  labs(caption = caption) +
  # And end with theming
  theme_void() +
  theme(
    text = element_text(colour = "white"),
    panel.border = element_blank(),
    plot.background = element_rect(fill = "#191d29", color = "#191d29"),
    plot.margin = margin(20, 20, 20, 20),
    plot.caption = element_text(
      hjust = 0.5, face = "bold",
      size = rel(1),
      lineheight = rel(1.2),
      margin = margin(t = 40, b = 20)
    )
  )

plot of chunk 20230125_celestial_map

Voilà! I cross-checked the results several times against those provided by d3-celestial.js on the location demo and against the underlying JavaScript calculations, and everything seems to be up and running.

Extra: Chinese constellations

Celestial Data also provides data for traditional Chinese constellations, so we can create a similar map with this whole different set of geometries:

const_cn <- load_celestial("constellations.lines.cn.min.geojson")

# Cut and prepare for geom_glowpath() on a single step
const_cn_end_lines <- sf_spherical_cut(const_cn,
  the_buff = hemisphere_s2,
  # Change the crs
  the_crs = target_crs,
  flip = flip_matrix
) %>%
  # To paths
  st_cast("MULTILINESTRING") %>%
  st_coordinates() %>%
  as.data.frame()


ggplot() +
  # Graticules
  geom_sf(data = grat_end, color = "grey60", linewidth = 0.25, alpha = 0.3) +
  # A blurry Milky Way
  with_blur(
    geom_sf(
      data = mw_end, aes(fill = fill), alpha = 0.1, color = NA,
      show.legend = FALSE
    ),
    sigma = 8
  ) +
  scale_fill_identity() +
  # Glowing stars
  geom_glowpoint(
    data = stars_end, aes(
      alpha = br, size =
        br, geometry = geometry
    ),
    color = "white", show.legend = FALSE, stat = "sf_coordinates"
  ) +
  scale_size_continuous(range = c(0.05, 0.75)) +
  scale_alpha_continuous(range = c(0.1, 0.5)) +
  # Glowing constellations
  geom_glowpath(
    data = const_cn_end_lines, aes(X, Y, group = interaction(L1, L2)),
    color = "white", size = 0.5, alpha = 0.8, shadowsize = 0.4, shadowalpha = 0.01,
    shadowcolor = "white", linejoin = "round", lineend = "round"
  ) +
  # Border of the sphere
  geom_sf(data = hemisphere_sf, fill = NA, color = "white", linewidth = 1.25) +
  # Caption
  labs(caption = caption) +
  # And end with theming
  theme_void() +
  theme(
    text = element_text(colour = "white"),
    panel.border = element_blank(),
    plot.background = element_rect(fill = "#191d29", color = "#191d29"),
    plot.margin = margin(20, 20, 20, 20),
    plot.caption = element_text(
      hjust = 0.5, face = "bold",
      size = rel(1),
      lineheight = rel(1.2),
      margin = margin(t = 40, b = 20)
    )
  )

plot of chunk 20230125_celestial_map_cn

References

Meeus J (1998). Astronomical algorithms, 2nd edition. Willmann-Bell, Richmond, Va. ISBN 9780943396613.

Frohn O, Hernangómez D (2023). “Celestial Data.” doi:10.5281/zenodo.7561601, https://dieghernan.github.io/celestial_data/.

Frohn O (2015). “d3-celestial.” https://github.com/ofrohn/d3-celestial/.

Fitter K (2019). “Celestial Maps.” (link).

Still J (2020). “Astronomical Calculations: Sidereal Time.” https://squarewidget.com/astronomical-calculations-sidereal-time/.

Pebesma E, Dunnington D (2020). “In r-spatial, the Earth is no longer flat.” (link).

Royé D (2020). “Firefly Cartography.” (link).

  1. In fact, the download is performed via jsDelivr, which distributes files hosted on GitHub via a CDN. This is supposed to improve performance, but in any case the underlying data source is the GitHub repo. 

]]>
https://dieghernan.github.io/202301_star-map-R/ posts astronomycelestialgeojsonggplot2gpkgmapsr_bloggersrspatialrstatss2sf https://dieghernan.github.io/202301_star-map-R/ Wed, 25 Jan 2023 00:00:00 +0100
Celestial Data A compilation of celestial data files

1 min.

DOI

This project provides several datasets in GeoJSON and GeoPackage formats of celestial objects as of the J2000 epoch.

The original files were provided in the d3-celestial plugin (Frohn 2015) under the BSD 3-Clause license. The datasets produced in this project consist of the same data provided in the d3-celestial plugin, processed with the R package sf (Pebesma 2018) to ensure their validity:

  • The spatial data objects are bounded to \([-180, -90, 180, 90]\).

  • Data is provided in WGS84 - World Geodetic System 1984 (EPSG:4326).

  • All geometries are valid as per ST_IsValid (GEOS 3.9.3).

Distribution

The data can be accessed from several API endpoints:


<!-- From GitHub -->
https://raw.githubusercontent.com/dieghernan/celestial_data/main/data/mw.min.geojson

<!-- From the Website -->
https://dieghernan.github.io/celestial_data/data/mw.min.geojson

<!-- From jsDelivr -->
https://cdn.jsdelivr.net/gh/dieghernan/celestial_data@main/data/mw.min.geojson

Data

Data is provided in GeoJSON (*.geojson) and GeoPackage (*.gpkg) formats. Additionally, a minified version of each GeoJSON file (*.min.geojson) is also provided.

The data source can be found on the corresponding GitHub repo.
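As a quick sanity check, any of the endpoints listed above can be read directly with sf. The sketch below (assuming the sf package is installed and the endpoint is reachable) loads the Milky Way outline and verifies the three properties stated earlier:

```r
# Read the Milky Way outline straight from jsDelivr and run basic checks
# (requires the sf package and an internet connection)
library(sf)

mw <- read_sf(
  "https://cdn.jsdelivr.net/gh/dieghernan/celestial_data@main/data/mw.min.geojson"
)

st_crs(mw)$epsg      # EPSG:4326 (WGS 84)
all(st_is_valid(mw)) # geometries are valid
st_bbox(mw)          # bounded to [-180, -90, 180, 90]
```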

List of files provided
  • asterisms.geojson
  • asterisms.gpkg
  • asterisms.min.geojson
  • constellations.borders.cn.geojson
  • constellations.borders.cn.gpkg
  • constellations.borders.cn.min.geojson
  • constellations.borders.geojson
  • constellations.borders.gpkg
  • constellations.borders.min.geojson
  • constellations.borders.min.min.geojson
  • constellations.bounds.cn.geojson
  • constellations.bounds.cn.gpkg
  • constellations.bounds.cn.min.geojson
  • constellations.bounds.geojson
  • constellations.bounds.gpkg
  • constellations.bounds.min.geojson
  • constellations.cn.csv
  • constellations.cn.geojson
  • constellations.cn.gpkg
  • constellations.cn.min.geojson
  • constellations.csv
  • constellations.geojson
  • constellations.gpkg
  • constellations.lines.cn.geojson
  • constellations.lines.cn.gpkg
  • constellations.lines.cn.min.geojson
  • constellations.lines.geojson
  • constellations.lines.gpkg
  • constellations.lines.min.geojson
  • constellations.min.geojson
  • dsonames.cn.csv
  • dsonames.csv
  • dsos.14.geojson
  • dsos.14.gpkg
  • dsos.14.min.geojson
  • dsos.20.geojson
  • dsos.20.gpkg
  • dsos.20.min.geojson
  • dsos.6.geojson
  • dsos.6.gpkg
  • dsos.6.min.geojson
  • dsos.bright.geojson
  • dsos.bright.gpkg
  • dsos.bright.min.geojson
  • lg.geojson
  • lg.gpkg
  • lg.min.geojson
  • messier.geojson
  • messier.gpkg
  • messier.min.geojson
  • mw.geojson
  • mw.gpkg
  • mw.min.geojson
  • starnames.cn.csv
  • starnames.csv
  • stars.14.geojson
  • stars.14.gpkg
  • stars.14.min.geojson
  • stars.6.geojson
  • stars.6.gpkg
  • stars.6.min.geojson
  • stars.8.geojson
  • stars.8.gpkg
  • stars.8.min.geojson

See additional details on Data Description.

Citation

Please cite these datasets as:

Frohn, O., & Hernangómez, D. (2023). Celestial Data [Data set]. https://doi.org/10.5281/zenodo.7561601

A BibTeX entry:


@misc{frohnhernangomez:2023,
    title        = {Celestial Data},
    author       = {Olaf Frohn and Diego Hernangómez},
    year         = 2023,
    doi          = {10.5281/zenodo.7561601},
    url          = {https://dieghernan.github.io/celestial_data/},
}

References

Frohn, Olaf. 2015. “d3-celestial” https://github.com/ofrohn/d3-celestial/.

Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10 (1): 439–446. https://doi.org/10.32614/RJ-2018-009.

]]>
https://dieghernan.github.io/projects/celestial-data/ projects astronomycelestialcsvdatasetgeojsongpkgmapsproject https://dieghernan.github.io/projects/celestial-data/ Mon, 23 Jan 2023 00:00:00 +0100
Hillshade, colors and marginal plots with tidyterra (II) The rain in Spain does not stay mainly in the plain

13 min.

This is the second post of the series “Hillshade, colors and marginal plots with tidyterra”. In this post I explore an approach for adding marginal plots to a ggplot2 map of a SpatRaster, showing the average values by longitude and latitude. See the first post of the series here.

If you love watching classic movies, especially from Hollywood’s Golden Age, you may recognize the following lyrics:

The rain in Spain stays mainly in the plain!

By George, she’s got it! By George, she’s got it!

Now, once again where does it rain? On the plain! On the plain!

And where’s that soggy plain? In Spain! In Spain!

The rain in Spain stays mainly in the plain!

The rain in Spain stays mainly in the plain!

This bold claim is made in My Fair Lady (1964) by Audrey Hepburn, Rex Harrison and Stanley Holloway. But as a Spaniard, I can tell you it is completely false.

The rain in Spain stays mainly in the north, most notably in Galicia. And I can prove it!

In this post I overlay a SpatRaster showing average precipitation data with an extra set of plots on the margins to identify where the rain in Spain stays (mainly).

Libraries

In this post we use the following libraries:

## Libraries

# Data manipulation
library(terra)
library(tidyterra)
library(dplyr)

# Get the data
library(geodata)
library(mapSpain)

# Plotting
library(ggplot2)
library(scales)
library(cowplot)
library(colorspace)

The plain in Spain

Well, the plain (or, as we name it, La Meseta Central) covers a large area of inland Spain, with an average altitude of 650 meters above sea level.

I didn’t find any accurate spatial data file with the bounds of the plain, so in this case I approximate it using a mixture of political borders (historically the Meseta is associated with Castile and Madrid) and elevation data to get a rough shape.


# Using mapSpain
the_plain <- esp_get_prov(
  c(
    "Madrid", "Castilla-La Mancha",
    "Extremadura", "Castilla y Leon",
    "Teruel"
  ),
  epsg = 4326, resolution = 1
) %>%
  mutate(the_plain = TRUE) %>%
  group_by(the_plain) %>%
  # Combine
  summarise() %>%
  # To terra, until this step was sf
  vect()

# Get altitude

# I use here a local directory to cache downloaded files on my PC.
# Modify this to your likes, e.g. using
# mydir <- tempdir()

mydir <- "~/R/mapslib/misc"


r_init <- elevation_30s("ESP", path = mydir)

# For better handling we set here the names
names(r_init) <- "alt"

# We don't want values lower than 0 on the raster
r <- r_init %>%
  mutate(alt = pmax(0, alt))


# Now intersect the raster and the vector and filter by range

exploded <- r %>%
  crop(the_plain, mask = TRUE) %>%
  # Let's define here a range of elevations
  filter(alt > 600 & alt < 1100) %>%
  drop_na() %>%
  as.polygons(dissolve = TRUE, na.rm = TRUE) %>%
  # Aggregate first
  aggregate() %>%
  # Explode vectors
  disagg() %>%
  # And fill holes
  fillHoles()



# Select biggest polygons (area bigger than 50 km x 50 km, i.e. 2,500 km2)
r_plain <- exploded %>%
  # Add area
  mutate(area = expanse(exploded)) %>%
  filter(area > 50000**2) %>%
  # And convert to lines
  as.lines()

autoplot(r_plain)

plot of chunk 20221212_plan_alt

We can now create a plot similar to the one produced in the previous post to identify the plain. First, I create a base layer with a representation of the hillshade that we will reuse later:


# Creating hillshade

slope <- terrain(r, "slope", unit = "radians")
aspect <- terrain(r, "aspect", unit = "radians")
hill <- shade(slope, aspect, 30, 45)

# normalize names
names(hill) <- "shades"

# Hillshading palette
pal_greys <- hcl.colors(1000, "Grays")

# Index of color by cell
index <- hill %>%
  mutate(index_col = rescale(shades, to = c(1, length(pal_greys)))) %>%
  mutate(index_col = round(index_col)) %>%
  pull(index_col)


# Get cols
vector_cols <- pal_greys[index]

# Need to avoid resampling
# and dont use aes

# Base hill plot
hill_plot <- ggplot() +
  geom_spatraster(
    data = hill, fill = vector_cols, maxcell = Inf,
    alpha = 1
  )

hill_plot

plot of chunk 20221212_hill

And finally we overlay the altitude and the outline of the plain in Spain.

# Overlaying and theming

# Aware of limits of the raster

alt_limits <- minmax(r) %>% as.vector()
# Round to lower and higher 500 integer with a min of 0
alt_limits <- pmax(
  c(floor(alt_limits[1] / 500), ceiling(alt_limits[2] / 500)) * 500,
  0
)

alt_limits
#> [1]    0 3500


base_text_size <- 9


plot_esp <- hill_plot +
  geom_spatraster(data = r, maxcell = Inf) +
  # Overlay the_plain
  geom_spatvector(
    data = r_plain,
    color = alpha("black", 0.7),
    linewidth = 0.15
  ) +
  scale_fill_hypso_tint_c(
    palette = "wiki-schwarzwald-cont",
    limits = alt_limits,
    alpha = 0.4,
    breaks = seq(0, 3500, 250),
    labels = label_comma()
  ) +
  guides(fill = guide_legend(
    title = "   m.",
    title.position = "top",
    keywidth = .5,
    reverse = TRUE,
    override.aes = list(alpha = 0.8)
  )) +
  labs(
    title = "Elevation of Spain",
    subtitle = "The plain represented with black line"
  ) +
  theme_minimal(base_family = "serif") +
  theme(
    plot.background = element_rect(fill = "white", color = "white"),
    plot.title = element_text(
      face = "bold", size = base_text_size * 1.5,
      hjust = 0.5
    ),
    plot.subtitle = element_text(
      size = base_text_size * 0.9,
      hjust = 0.5
    ),
    plot.caption = element_text(
      margin = margin(t = base_text_size * 3),
      face = "italic"
    ),
    legend.key = element_rect("grey50"),
    legend.text = element_text(hjust = 0),
    legend.position = "left"
  )

plot_esp

plot of chunk 20221212_hill_overlay

The rain in Spain

Let’s check now whether the rain falls mainly in the plain or not. We use geodata::worldclim_country() to get the average precipitation by month from WorldClim:

# Precipitation of Spain

# Get precip data
precip <- geodata::worldclim_country("ESP", "prec", mydir)

precip
#> class       : SpatRaster 
#> dimensions  : 1980, 2760, 12  (nrow, ncol, nlyr)
#> resolution  : 0.008333333, 0.008333333  (x, y)
#> extent      : -18.5, 4.5, 27.5, 44  (xmin, xmax, ymin, ymax)
#> coord. ref. : lon/lat WGS 84 (EPSG:4326) 
#> source      : ESP_wc2.1_30s_prec.tif 
#> names       : ESP_w~rec_1, ESP_w~rec_2, ESP_w~rec_3, ESP_w~rec_4, ESP_w~rec_5, ESP_w~rec_6, ... 
#> min values  :           0,           1,           1,           0,           0,           0, ... 
#> max values  :         296,         255,         199,         166,         181,         140, ...

We now have a SpatRaster with 12 layers, one per month. Adding the values by cell gives the average annual precipitation. Note that we also need to normalize the SpatRaster to the projection, extent and resolution of our hill object:

# Sum all layers
precip_avg <- sum(precip)

precip_avg
#> class       : SpatRaster 
#> dimensions  : 1980, 2760, 1  (nrow, ncol, nlyr)
#> resolution  : 0.008333333, 0.008333333  (x, y)
#> extent      : -18.5, 4.5, 27.5, 44  (xmin, xmax, ymin, ymax)
#> coord. ref. : lon/lat WGS 84 (EPSG:4326) 
#> source(s)   : memory
#> name        :  sum 
#> min value   :    8 
#> max value   : 2055


compare_spatrasters(precip_avg, hill)

# Align raster using hill

precip_avg_mask <- precip_avg %>%
  project(hill) %>%
  crop(hill) %>%
  mask(hill)

# Normalize
names(precip_avg_mask) <- "prec"

compare_spatrasters(precip_avg_mask, hill)

autoplot(precip_avg_mask)

plot of chunk 20221212_prepare_precip

Creating a modified palette

We can now start representing our precipitation map. I chose here to create a custom palette with colorspace to better highlight the differences:


mypal <- sequential_hcl(
  n = 16,
  h = c(320, 80),
  c = c(60, 65, 20),
  l = c(30, 95), power = c(0.7, 1.3),
  rev = TRUE
)

show_col(mypal)

plot of chunk 20221212_mypal

And now we can create the final map showing if the rain in Spain stays mainly in the plain:

# Precipitation limits, rounded to 100
prec_limits <- floor(as.vector(minmax(precip_avg_mask)) / 100) * 100 + c(0, 100)


meteo_plot <- hill_plot +
  geom_spatraster(data = precip_avg_mask, maxcell = Inf) +
  # Overlay the_plain
  geom_spatvector(
    data = r_plain, color = alpha("black", 0.7),
    linewidth = .1
  ) +
  # This part is theming only
  scale_fill_gradientn(
    colours = alpha(mypal, 0.7),
    na.value = NA,
    labels = label_comma(),
    breaks = seq(0, prec_limits[2], 250)
  ) +
  guides(fill = guide_legend(
    direction = "horizontal",
    keyheight = .5,
    keywidth = 2,
    title.position = "right",
    label.position = "bottom",
    nrow = 1,
    family = "serif",
    title = " mm.",
    override.aes = list(alpha = 0.9)
  )) +
  labs(
    title = "Average yearly precipitation of Spain",
    subtitle = "The rain in Spain does not stay mainly in the plain"
  ) +
  theme_minimal(base_family = "serif") +
  theme(
    plot.background = element_rect(fill = "white", color = "white"),
    plot.title = element_text(
      face = "bold", size = base_text_size * 1.5,
      hjust = 0.5
    ),
    plot.subtitle = element_text(
      size = base_text_size,
      hjust = 0.5
    ),
    axis.text = element_text(size = base_text_size * 0.7, face = "italic"),
    legend.key = element_rect("grey50"),
    legend.position = "bottom",
    legend.title = element_text(size = base_text_size * .7),
    legend.text = element_text(size = base_text_size * .7),
    legend.spacing.x = unit(0, "pt")
  )


meteo_plot

plot of chunk 20221212_precip_end

We can now check that the rain in Spain falls mainly on the Atlantic coast (the north of Spain), and specifically in Galicia. That’s why in Spanish the lyrics The rain in Spain stays mainly in the plain were translated into:

La lluvia en Sevilla es una pura maravilla.

That can be translated as “The rain in Seville is a true marvel”. And it is, indeed. Seville (located in the south, in the Guadalquivir Valley) has circa 50 rainy days per year, featuring very hot and dry summers.

Marginal plots (finally)

We can now start shaping our final plot. The idea is to create two bar charts representing the value to be plotted (in this case, average annual precipitation) by longitude and by latitude.

But first we add some extra margins and axis titles to the main plot, so we can insert those marginal plots easily:

# Now we can add titles on the secondary axis

plot_main <- meteo_plot +
  xlab("") +
  ylab("") +
  # titles on secondary axis, for later
  scale_x_continuous(sec.axis = dup_axis(
    name = "avg. precipitation by longitude"
  )) +
  scale_y_continuous(sec.axis = dup_axis(
    name = "avg. precipitation by latitude"
  )) +
  theme(
    axis.title.x = element_text(
      margin = margin(t = base_text_size),
      size = base_text_size * 0.9, face = "italic"
    ),
    axis.title.y = element_text(
      angle = 270,
      margin = margin(
        l = base_text_size,
        t = base_text_size
      ),
      size = base_text_size * 0.9, face = "italic"
    )
  )

plot_main

plot of chunk 20221212_precip_for_margin

Profiling marginal plots

In the following code I am just drafting how the marginal plots will look, so we can preview the final result:


# Profiling marginal plots

# Getting averages by x,y
marg_x <- precip_avg_mask %>%
  as_tibble(xy = TRUE) %>%
  drop_na() %>%
  group_by(x) %>%
  summarise(avg = mean(prec))

marg_y <- precip_avg_mask %>%
  as_tibble(xy = TRUE) %>%
  drop_na() %>%
  group_by(y) %>%
  summarise(avg = mean(prec))


# cowplot will drop the axes, so we add reference breaks at 1000 and 2000
br_4marginal <- c(1000, 2000)

labs <- data.frame(labs = paste(
  prettyNum(br_4marginal, big.mark = " "),
  "mm."
))

labs$for_x <- max(marg_x$x) - diff(range(marg_x$x)) * 0.05
labs$for_y <- min(marg_y$y) + diff(range(marg_y$y)) * 0.05
labs$y <- br_4marginal

# Profiling
ggplot() +
  geom_col(
    data = marg_x,
    aes(x, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  geom_text(
    data = labs, aes(x = for_x, y = y, label = labs),
    nudge_y = 100,
    size = 3
  ) +
  scale_fill_gradientn(
    colours = alpha(mypal, 0.9),
    na.value = NA,
    labels = label_comma(),
    limits = prec_limits
  ) +
  scale_y_continuous(
    breaks = br_4marginal,
    limits = c(0, max(br_4marginal) * 1.5)
  ) +
  theme_void() +
  theme(panel.grid.major.y = element_line(
    colour = "grey50",
    linetype = "dashed"
  ))

plot of chunk 20221212_profile_marg


ggplot() +
  geom_col(
    data = marg_y,
    aes(y, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  geom_text(
    data = labs, aes(x = for_y, y = y, label = labs),
    nudge_y = 100,
    angle = 270,
    size = 3
  ) +
  scale_fill_gradientn(
    colours = alpha(mypal, 0.9),
    na.value = NA,
    labels = label_comma(),
    limits = prec_limits
  ) +
  scale_y_continuous(
    breaks = br_4marginal,
    limits = c(0, max(br_4marginal) * 1.2)
  ) +
  coord_flip() +
  theme_void() +
  theme(panel.grid.major.x = element_line(
    colour = "grey50",
    linetype = "dashed"
  ))

plot of chunk 20221212_profile_marg

Putting all the pieces together

Finally, we use cowplot::axis_canvas() to create the marginal plots as we want:


# Last step: We combine plots

# Marginal plots
plot_x <- axis_canvas(plot_main, axis = "x") +
  geom_col(
    data = marg_x,
    aes(x, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  geom_text(
    data = labs, aes(x = for_x, y = y, label = labs),
    # Adjust the position of the labels
    nudge_y = 300,
    family = "serif",
    fontface = "italic",
    size = base_text_size * 0.2
  ) +
  scale_fill_gradientn(
    colours = alpha(mypal, 0.9),
    na.value = NA,
    labels = label_comma(),
    limits = prec_limits
  ) +
  scale_y_continuous(
    breaks = br_4marginal,
    limits = c(0, max(br_4marginal) * 1.5)
  ) +
  theme_void() +
  theme(panel.grid.major.y = element_line(
    colour = "grey50",
    linetype = "dashed",
    linewidth = 0.1
  ))

plot_x

plot of chunk 20221212_prepare_axis


plot_y <- axis_canvas(plot_main, axis = "y", coord_flip = TRUE) +
  geom_col(
    data = marg_y,
    aes(y, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  geom_text(
    data = labs, aes(x = for_y, y = y, label = labs),
    # Adjust the position of the labels
    nudge_y = 300,
    angle = 270,
    family = "serif",
    fontface = "italic",
    size = base_text_size * 0.2
  ) +
  scale_fill_gradientn(
    limits = prec_limits,
    colours = alpha(mypal, 0.9),
    na.value = NA,
    labels = label_comma()
  ) +
  scale_y_continuous(
    breaks = br_4marginal,
    limits = c(0, max(br_4marginal) * 1.5)
  ) +
  coord_flip() +
  theme_void() +
  theme(panel.grid.major.x = element_line(
    colour = "grey50",
    linetype = "dashed",
    linewidth = 0.1
  ))
plot_y

plot of chunk 20221212_prepare_axis

And insert everything in the main plot. See the final result:


# Combine all plots into one
sizes_axis <- grid::unit(.3, "null")

plot_final <- insert_xaxis_grob(plot_main, plot_x,
  position = "top",
  height = sizes_axis
)
plot_final <- insert_yaxis_grob(plot_final, plot_y,
  position = "right",
  width = sizes_axis * 1.25
)

gg_final <- ggdraw(plot_final)
gg_final

plot of chunk 20221212_finalplot

And with a bit of effort we got it.

Recap

Much of the code we have created relates to the theming and labels of the plot. Here you can find a simplified version:

Simplified version

# Libraries
# Data manipulation
library(terra)
library(tidyterra)
library(dplyr)

# Get the data
library(geodata)

# Plotting
library(ggplot2)
library(scales)
library(cowplot)
library(colorspace)

# Get the data
mydir <- "~/R/mapslib/misc"

r <- elevation_30s("ESP", path = mydir) %>%
  rename(alt = 1) %>%
  mutate(alt = pmax(0, alt))

# Creating hillshade

slope <- terrain(r, "slope", unit = "radians")
aspect <- terrain(r, "aspect", unit = "radians")
hill <- shade(slope, aspect, 30, 45)

# normalize names
names(hill) <- "shades"

# Hillshading palette
pal_greys <- hcl.colors(1000, "Grays")

# Index of color by cell
index <- hill %>%
  mutate(index_col = rescale(shades, to = c(1, length(pal_greys)))) %>%
  mutate(index_col = round(index_col)) %>%
  pull(index_col)


# Get cols
vector_cols <- pal_greys[index]


# Base hill plot
hill_plot <- ggplot() +
  geom_spatraster(
    data = hill, fill = vector_cols, maxcell = Inf,
    alpha = 1
  ) +
  theme_minimal()

# Overlay
precip <- geodata::worldclim_country("ESP", "prec", mydir)
precip_end <- sum(precip) %>%
  project(hill) %>%
  crop(hill) %>%
  mask(hill) %>%
  rename(prec = 1)

p_range <- as.vector(minmax(precip_end))

mypal <- sequential_hcl(
  n = 16,
  h = c(320, 80),
  c = c(60, 65, 20),
  l = c(30, 95), power = c(0.7, 1.3),
  rev = TRUE
)


base_plot <- hill_plot +
  geom_spatraster(data = precip_end, maxcell = Inf) +
  scale_fill_gradientn(
    colors = alpha(mypal, 0.7), na.value = NA,
    limits = p_range
  )

# Marginal plots
# Data
marg_x <- precip_end %>%
  as_tibble(xy = TRUE) %>%
  drop_na() %>%
  group_by(x) %>%
  summarise(avg = mean(prec))

marg_y <- precip_end %>%
  as_tibble(xy = TRUE) %>%
  drop_na() %>%
  group_by(y) %>%
  summarise(avg = mean(prec))

# Adding marginal plots

plot_x <- axis_canvas(base_plot, axis = "x") +
  geom_col(
    data = marg_x,
    aes(x, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  scale_fill_gradientn(
    colours = alpha(mypal, 0.9),
    na.value = NA,
    limits = p_range
  ) +
  theme_void()

plot_y <- axis_canvas(base_plot, axis = "y", coord_flip = TRUE) +
  geom_col(
    data = marg_y,
    aes(y, avg, fill = avg),
    color = NA,
    show.legend = FALSE
  ) +
  scale_fill_gradientn(
    colours = alpha(mypal, 0.9),
    na.value = NA,
    limits = p_range
  ) +
  theme_void() +
  coord_flip()

# All pieces together
sizes_axis <- grid::unit(.3, "null")

plot_final_simp <- insert_xaxis_grob(base_plot, plot_x,
  position = "top",
  height = sizes_axis
)
plot_final_simp <- insert_yaxis_grob(plot_final_simp, plot_y,
  position = "right",
  width = sizes_axis * 1.25
)

gg_final_simp <- ggdraw(plot_final_simp)
gg_final_simp

plot of chunk 20221212_simplified

]]>
https://dieghernan.github.io/202212_tidyterra-hillshade-2/ posts ggplot2insetmapSpainmapsr_bloggersrspatialrstatsterratidyterra https://dieghernan.github.io/202212_tidyterra-hillshade-2/ Mon, 12 Dec 2022 00:00:00 +0100
Hillshade, colors and marginal plots with tidyterra (I) How to overlay SpatRasters

8 min.

This is the first post of a series of two, showing how to overlay a SpatRaster on top of a hillshade background. The next post shows how to add marginal plots with the values of the raster by longitude and latitude. See the second post here.

Using shadow effects on relief maps is a very common technique that allows producing informative yet beautiful maps. If you are interested in this topic and you work with R, you have probably seen this map:

swissmap

The production of this map by Timo Grossenbacher has been a reference for years. However, recent developments in the R package ecosystem (terra, sf and support of both classes in ggplot2, the development of ggnewscale, etc.) make it even easier to produce this type of map.

In fact, Dominic Royé recently wrote a very detailed post on creating shadow effects on map reliefs. In this first post of the series I replicate that technique with a slight variation (e.g. not making use of ggnewscale) and discuss a bit the choice of a color palette for this kind of map.

Libraries

I use the following libraries:


## Libraries

library(terra)
library(tidyterra)
library(ggplot2)
library(dplyr)
library(scales)

# Get the data
library(geodata)

Get the data

The first step is to get the altitude data. I use the package geodata here for simplicity, but you can also use elevatr, which is much more complete. However, elevatr returns the result as a RasterLayer, so you would need to convert the object to a SpatRaster with terra::rast().
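For reference, a minimal sketch of that elevatr workflow. The call to get_elev_raster() and its arguments below are illustrative (the zoom level and bounding box are my own choices, not taken from this post), assuming elevatr, terra and sf are installed:

```r
# Hypothetical workflow: fetch elevation with elevatr, convert to SpatRaster
library(elevatr)
library(terra)
library(sf)

# Any sf object covering the area of interest works as input;
# here a bounding box roughly matching Romania (see extent below)
bb <- st_as_sf(st_as_sfc(st_bbox(
  c(xmin = 20.1, ymin = 43.5, xmax = 29.9, ymax = 48.4),
  crs = st_crs(4326)
)))

r_elev <- get_elev_raster(bb, z = 5) # RasterLayer (raster package class)
r_spat <- rast(r_elev)               # convert to terra's SpatRaster
```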


# Cache map data
mydir <- "~/R/mapslib/misc"

r_init <- elevation_30s("ROU", path = mydir)

r_init
#> class       : SpatRaster 
#> dimensions  : 588, 1176, 1  (nrow, ncol, nlyr)
#> resolution  : 0.008333333, 0.008333333  (x, y)
#> extent      : 20.1, 29.9, 43.5, 48.4  (xmin, xmax, ymin, ymax)
#> coord. ref. : lon/lat WGS 84 (EPSG:4326) 
#> source      : ROU_elv_msk.tif 
#> name        : ROU_elv_msk 
#> min value   :          -4 
#> max value   :        2481

# For better handling we set here the names
names(r_init) <- "alt"

# We don't want values lower than 0
r <- r_init %>%
  mutate(alt = pmax(0, alt))

r
#> class       : SpatRaster 
#> dimensions  : 588, 1176, 1  (nrow, ncol, nlyr)
#> resolution  : 0.008333333, 0.008333333  (x, y)
#> extent      : 20.1, 29.9, 43.5, 48.4  (xmin, xmax, ymin, ymax)
#> coord. ref. : lon/lat WGS 84 (EPSG:4326) 
#> source      : memory 
#> name        :  alt 
#> min value   :    0 
#> max value   : 2481

We can now have a quick look at the plot with tidyterra::autoplot():

# Quick look
autoplot(r) +
  theme_minimal()

plot of chunk 20221017-1-autoplot

Hillshading

The next step is to calculate the hillshade. Royé has a very detailed discussion here, so I will not go into details. Basically, we want to create a layer that approximates the potential “texture” of the surface based on the elevation and the sun position. This is straightforward with the terra::terrain() and terra::shade() functions:


## Create hillshade effect

slope <- terrain(r, "slope", unit = "radians")
aspect <- terrain(r, "aspect", unit = "radians")
hill <- shade(slope, aspect, 30, 270)

# normalize names
names(hill) <- "shades"

# Hillshading, but we need a palette
pal_greys <- hcl.colors(1000, "Grays")

ggplot() +
  geom_spatraster(data = hill) +
  scale_fill_gradientn(colors = pal_greys, na.value = NA)
#> SpatRaster resampled to ncells = 501501

plot of chunk 20221017-2-hillroye

We can also use the following hack to avoid the use of a second scale_fill_* (via ggplot2 or via ggnewscale::new_scale_fill()):

  • Select a vector of colors (in this post pal_greys).
  • Extract the values of the raster and rescale them to the length of the palette (c(1, 1000)).
  • Round those rescaled values to the nearest integer, so we have an index indicating which value of pal_greys should be mapped to each cell.
  • Now use the fill parameter of the geom_ instead of using the scale.

An additional note: geom_spatraster() has a maxcell parameter that performs a spatial resampling if the raster has more cells than maxcell. This is for optimization (note that terra::plot() has the same setup, which users often forget about), but we can force plotting all the cells with maxcell = Inf. In this fill-based approach, maxcell must effectively be set to Inf to ensure that the number of color values matches the number of cells.


# Use a vector of colors


index <- hill %>%
  mutate(index_col = rescale(shades, to = c(1, length(pal_greys)))) %>%
  mutate(index_col = round(index_col)) %>%
  pull(index_col)


# Get cols
vector_cols <- pal_greys[index]

# Need to avoid resampling
# and dont use aes

hill_plot <- ggplot() +
  geom_spatraster(
    data = hill, fill = vector_cols, maxcell = Inf,
    alpha = 1
  )

hill_plot

plot of chunk 20221017-3-hillalt

Selecting colors

The selection of colors for elevation maps is a key aspect when designing this kind of visualization, since colors can be confused with environmental phenomena (Patterson and Jenny, 2011). For example, by convention, greens are associated with low elevations while oranges, browns and whites are associated with high elevations in some of the most common elevation palettes (aka hypsometric tints). See for example the Wikipedia topographic maps conventions.

This is not ideal, since greens can be confused with forests, for example, so an elevation map of desert areas would not be well served by a green-brown-white color scheme.

There is an additional point to take into account when designing color palettes for maps. A regular gradient just interpolates colors, assuming that the distance between colors is the same:


# Regular gradient
grad <- hypso.colors(10, "dem_poster")

autoplot(r) +
  scale_fill_gradientn(colours = grad, na.value = NA)

plot of chunk 20221017-regular-gradient
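The “equal distance” interpolation above can be sketched with base R’s colorRampPalette(), which by default splits the ramp evenly among the input colors; its bias argument skews the spacing, which is the same general idea behind uneven gradients (the colors below are illustrative, not the tidyterra palettes):

```r
# Even vs uneven interpolation of a 4-color ramp (base R only)
pal <- c("darkgreen", "khaki", "chocolate4", "white")

even   <- colorRampPalette(pal)(10)            # equal spacing between inputs
uneven <- colorRampPalette(pal, bias = 2)(10)  # spacing skewed by 'bias'

even
uneven
```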

For that reason, tidyterra provides additional gradients whose colors are placed unevenly, with the goal of providing a better understanding of the map:


# Hypso gradient
grad_hypso <- hypso.colors2(10, "dem_poster")


autoplot(r) +
  scale_fill_gradientn(colours = grad_hypso, na.value = NA)

plot of chunk 20221017-hypso-gradient

Can you notice the difference? In the first map greens are the dominant color. However, greens represent a wide range of elevations (0–750 meters) that covers most of the territory. In terms of perception, we can’t clearly spot elevation differences in the center of the country, while with the uneven gradient greens only correspond to the 0–250 meter range and the overall perception of elevation improves. Note that the only difference between the plots is the color palette.

To produce our map we are going to visually assess a selection of palettes provided by tidyterra. We use tidyterra::scale_fill_hypso_tint_c() instead of tidyterra::scale_fill_hypso_c() to take advantage of the uneven color gradients.

A downside of using these scales is that we also need to adjust the limits argument to make ggplot2 aware of the range of values of the raster. This is easily achieved with terra::minmax(), but I added an extra touch, rounding the range of values up and down to the nearest 500.


# Try some options, but we need to be aware of the values of our raster

r_limits <- minmax(r) %>% as.vector()

# Rounded to lower and upper 500
r_limits <- c(floor(r_limits[1] / 500), ceiling(r_limits[2] / 500)) * 500

# And making min value to 0.
r_limits <- pmax(r_limits, 0)

# Compare
minmax(r) %>% as.vector()
#> [1]    0 2481
r_limits
#> [1]    0 2500


# Now lets have some fun with scales from tidyterra

elevt_test <- ggplot() +
  geom_spatraster(data = r)

# Create a helper function

plot_pal_test <- function(pal) {
  elevt_test +
    scale_fill_hypso_tint_c(
      limits = r_limits,
      palette = pal
    ) +
    ggtitle(pal) +
    theme_minimal()
}

plot_pal_test("etopo1_hypso")
plot_pal_test("dem_poster")
plot_pal_test("spain")
plot_pal_test("pakistan")
plot_pal_test("utah_1")
plot_pal_test("wiki-2.0_hypso")

plot of chunk 20221017-4-explorepals (six palette previews)

I finally selected the "dem_poster" palette for my plot, but this is completely a personal choice. You should select the palette you feel most comfortable with. See the full range of color palettes provided by tidyterra here.

Final plot

So now it is time to blend the hillshade layer and the altitude layer, using some level of alpha on the upper layer.


base_plot <- hill_plot +
  # Avoid resampling with maxcell
  geom_spatraster(data = r, maxcell = Inf) +
  scale_fill_hypso_tint_c(
    limits = r_limits,
    palette = "dem_poster",
    alpha = 0.4,
    labels = label_comma(),
    # For the legend I use custom breaks
    breaks = c(
      seq(0, 500, 100),
      seq(750, 1500, 250),
      2000
    )
  )

base_plot

plot of chunk 20221017-5-blend

And with a bit of trickery and theming we can have our final map. First we load a font from Google with a custom function:

myload_fonts <- function(fontname, family,
                         fontdir = tempdir()) {
  fontname_url <- utils::URLencode(fontname)
  fontzip <- tempfile(fileext = ".zip")
  download.file(paste0("https://fonts.google.com/download?family=", fontname_url),
    fontzip,
    quiet = TRUE,
    mode = "wb"
  )
  unzip(fontzip,
    exdir = fontdir,
    junkpaths = TRUE
  )

  # Load fonts
  paths <- list(
    regular = "Regular.ttf",
    bold = "Bold.ttf",
    italic = "Italic.ttf",
    bolditalic = "BoldItalic.ttf"
  )


  namefile <- gsub(" ", "", fontname)
  paths_end <- file.path(
    fontdir,
    paste(namefile, paths, sep = "-")
  )


  names(paths_end) <- names(paths)

  sysfonts::font_add(family,
    regular = paths_end["regular"],
    bold = paths_end["bold"],
    italic = paths_end["italic"],
    bolditalic = paths_end["bolditalic"]
  )

  return(invisible())
}

And now we theme it:

# Theming
myload_fonts("Noto Serif", "notoserif", "~/R/googlefonts")
showtext::showtext_auto()

# Adjust text size
base_text_size <- 30

base_plot +
  # Change guide
  guides(fill = guide_legend(
    title = "   m.",
    direction = "horizontal",
    nrow = 1,
    keywidth = 1.75,
    keyheight = 0.5,
    label.position = "bottom",
    title.position = "right",
    override.aes = list(alpha = 1)
  )) +
  labs(
    title = "Elevation of Romania",
    subtitle = "Hillshade and hypsometric tint blend",
    caption = paste0(
      "@dhernangomez using tidyterra, ggplot2, geodata R packages.",
      " Data: Shuttle Radar Topography Mission (SRTM)"
    )
  ) +
  theme_minimal(base_family = "notoserif") +
  theme(
    plot.background = element_rect("grey97", colour = NA),
    plot.margin = margin(20, 20, 20, 20),
    plot.caption = element_text(size = base_text_size * 0.5),
    plot.title = element_text(face = "bold", size = base_text_size * 1.4),
    plot.subtitle = element_text(
      margin = margin(b = 10),
      size = base_text_size
    ),
    axis.text = element_text(size = base_text_size * 0.7),
    legend.position = "bottom",
    legend.title = element_text(size = base_text_size * 0.8),
    legend.text = element_text(size = base_text_size * 0.8),
    legend.key = element_rect("grey50"),
    legend.spacing.x = unit(0, "pt")
  )

plot of chunk 20221017-6-finalplot

References

Patterson T, Jenny B (2011). “The Development and Rationale of Cross-blended Hypsometric Tints.” Cartographic Perspectives, 31–46. https://doi.org/10.14714/CP69.20

Grossenbacher T (2016). “Beautiful thematic maps with ggplot2 (only).” https://timogrossenbacher.ch/bivariate-maps-with-ggplot2-and-sf/.

Royé D (2022). “Hillshade effects.” https://dominicroye.github.io/en/2022/hillshade-effects/.

Hernangómez D (2022). tidyterra: tidyverse Methods and ggplot2 Helpers for terra Objects. <doi:10.5281/zenodo.6572471> https://doi.org/10.5281/zenodo.6572471.

]]>
https://dieghernan.github.io/202210_tidyterra-hillshade/ posts ggplot2mapsr_bloggersrspatialrstatsterratidyterra https://dieghernan.github.io/202210_tidyterra-hillshade/ Mon, 17 Oct 2022 00:00:00 +0200
Introducing tidyterra Easily work and ggplot SpatRasters

4 min.

If you have been playing around with R for a while, you are probably familiar with the volcano dataset:


data("volcano")
image(volcano, col = terrain.colors(256, rev = TRUE))

plot of chunk 20220525_volcano

This represents the topographic information of one of the volcanoes of Auckland (New Zealand), specifically Maungawhau / Mount Eden. But did you know that this map is flipped?

In this post I introduce the tidyterra package, recently added to CRAN, and show you how to geotag the volcano dataset. We will also produce ggplot2 maps using the functions of tidyterra.

# Libraries
library(terra)
library(ggplot2)
library(tidyterra)
library(maptiles)
library(sf)

Wait, volcano is flipped?

Let’s check it out. Thanks to the maptiles package we can get a glimpse of the location of Maungawhau using map tiles (like the ones Google Maps uses). We will use tidyterra to display the map tile:


# location of Maungawhau

box <- c(
  174.7611552780,
  -36.8799200525,
  174.7682380109,
  -36.8719519780
)
class(box) <- "bbox"
box <- st_as_sfc(box)
st_crs(box) <- 4326

box <- box %>%
  # To crs for NZGD49
  st_transform(27200)

tile <- get_tiles(box, crop = TRUE, zoom = 16)


ggtile <- ggplot() +
  geom_spatraster_rgb(data = tile)

ggtile

plot of chunk 20220525_tile

So well, here you go: a neat and crisp RGB tile of Maungawhau. Now the next question is: how do we match the volcano dataset (a matrix) with this geotagged map tile? Let's check it out.

Working with SpatRasters

Thanks to the terra package we can start converting volcano into a SpatRaster:


volcano_rast <- rast(volcano)

terra::plot(volcano_rast)

plot of chunk 20220525_volcano_raster


# Wait, it is flipped!
volcano_rast_ok <- rast(volcano[
  seq(nrow(volcano), 1, -1),
  seq(ncol(volcano), 1, -1)
])

# Much better!
terra::plot(volcano_rast_ok)

plot of chunk 20220525_volcano_raster


volcano_rast_ok
#> class       : SpatRaster 
#> dimensions  : 87, 61, 1  (nrow, ncol, nlyr)
#> resolution  : 1, 1  (x, y)
#> extent      : 0, 61, 0, 87  (xmin, xmax, ymin, ymax)
#> coord. ref. :  
#> source      : memory 
#> name        : lyr.1 
#> min value   :    94 
#> max value   :   195

Nice! Now we have a raster of volcano, but still without geotagged information. Thanks to this article by Tomislav Hengl (\@tom_hengl) we can check the basic geographic parameters of volcano (see Volcano Maungawhau), which are:

  • CRS: EPSG:27200
  • xllcorner: 2667400
  • yllcorner: 6478700
  • cellsize: 10 m
  • ncols: 61
  • nrows: 87

And we can translate that easily to an empty SpatRaster:


# Extra length to handle the extent properly
xrange <- range(seq(from = 2667400, length.out = 62, by = 10))
yrange <- range(seq(from = 6478700, length.out = 88, by = 10))

template <- rast(
  crs = "EPSG:27200",
  xmin = xrange[1],
  xmax = xrange[2],
  ymin = yrange[1],
  ymax = yrange[2],
  resolution = 10
)
template
#> class       : SpatRaster 
#> dimensions  : 87, 61, 1  (nrow, ncol, nlyr)
#> resolution  : 10, 10  (x, y)
#> extent      : 2667400, 2668010, 6478700, 6479570  (xmin, xmax, ymin, ymax)
#> coord. ref. : NZGD49 / New Zealand Map Grid (EPSG:27200)

So now we only need to transfer the values from volcano_rast_ok to our template:


# Use tidyterra to pull the values of one raster
# and create a new layer

volcano2 <- template %>%
  mutate(elevation = pull(volcano_rast_ok, lyr.1)) %>%
  select(elevation)

volcano2
#> class       : SpatRaster 
#> dimensions  : 87, 61, 1  (nrow, ncol, nlyr)
#> resolution  : 10, 10  (x, y)
#> extent      : 2667400, 2668010, 6478700, 6479570  (xmin, xmax, ymin, ymax)
#> coord. ref. : NZGD49 / New Zealand Map Grid (EPSG:27200) 
#> source      : memory 
#> name        : elevation 
#> min value   :        94 
#> max value   :       195

terra::plot(volcano2)

plot of chunk 20220525_create_volcano2


# And plot it
ggtile +
  geom_spatraster(data = volcano2) +
  scale_fill_terrain_c(alpha = 0.75)

plot of chunk 20220525_create_volcano2

An Easter egg

The volcano dataset may not be completely up to date. As a complement, tidyterra includes a .tif file with the same dimensions as our volcano2 raster, but with the topographic values extracted from the Auckland LiDAR 1m DEM (2013) and resampled to a resolution of 5x5 meters to optimize the package size. See here how to load it and explore the plotting possibilities of tidyterra:


# Load our Easter egg

volcano2_easter <- rast(system.file("extdata/volcano2.tif",
  package = "tidyterra"
))

volcano2_easter
#> class       : SpatRaster 
#> dimensions  : 174, 122, 1  (nrow, ncol, nlyr)
#> resolution  : 5, 5  (x, y)
#> extent      : 1756969, 1757579, 5917003, 5917873  (xmin, xmax, ymin, ymax)
#> coord. ref. : NZGD2000 / New Zealand Transverse Mercator 2000 (EPSG:2193) 
#> source      : volcano2.tif 
#> name        : elevation 
#> min value   :  76.26222 
#> max value   :  195.5542
terra::plot(volcano2_easter)

plot of chunk 20220525_easteregg



# Only altitudes of more than 130m

volcano_filter <- volcano2_easter %>%
  filter(elevation > 130)


ggtile +
  geom_spatraster(data = volcano_filter) +
  scale_fill_viridis_c(na.value = NA, alpha = 0.7) +
  labs(fill = "Elevation (m)")

plot of chunk 20220525_easteregg



# Contour lines

ggtile +
  geom_spatraster_contour(data = volcano2_easter, binwidth = 10)

plot of chunk 20220525_easteregg



# Contour lines + contour polygons

ggtile +
  geom_spatraster_contour_filled(
    data = volcano2_easter,
    breaks = seq(70, 210, 20),
    alpha = 0.7
  ) +
  geom_spatraster_contour(
    data = volcano2_easter, binwidth = 2.5,
    alpha = 0.7, size = .2, color = "grey10"
  ) +
  coord_sf(expand = FALSE)

plot of chunk 20220525_easteregg

]]>
https://dieghernan.github.io/202205_tidyterra/ posts ggplot2mapsmaptilesr_bloggersr_packagerspatialrstatsterratidyterra https://dieghernan.github.io/202205_tidyterra/ Wed, 25 May 2022 00:00:00 +0200
Unknown pleasures with R Joyplot elevation maps with ggridges and terra

8 min.

In 1970, Harold D. Craft Jr. published his Ph.D. thesis “Radio observations of the pulse profiles and dispersion measures of twelve pulsars”. The thesis (337 pages) includes, on pages 214 to 216, the following depictions of successive pulses of some pulsars:

Craft
Pulsars

From “Radio Observations of the Pulse Profiles and Dispersion Measures of Twelve Pulsars” by Harold D. Craft, Jr. (September 1970). Original source: https://blogs.scientificamerican.com/sa-visual/pop-culture-pulsar-origin-story-of-joy-division-s-unknown-pleasures-album-cover-video/

Nine years later, a young graphic designer named Peter Saville had a new project: he had to design the cover of the debut album of a young British rock band named Joy Division. At some point in the process Bernard Sumner (lead guitarist of Joy Division)1, found the following image in The Cambridge Encyclopaedia of Astronomy (1977 edition):

Cambridge: CP 1919 Pulsar
Image

Saville presented a black and white version, producing a cover that reached iconic status in the ’80s. This cover has been reproduced in the form of tattoos, fashion clothes, merchandising, video games and even 3-D sculptures:

Joy Division - Unknown pleasure
cover

If you are interested in knowing more about this fascinating history of science and design, you can find it in Pop Culture Pulsar: Origin Story of Joy Division’s Unknown Pleasures Album Cover by Jen Christiansen.

Since then, this kind of plot has become very popular, known as the ridge plot or joyplot in honor of Joy Division. In this post, I will produce “joyplots” with R for specific regions of the world, using elevation data to create the ridges.

Creating joyplot maps with R

This topic has already been covered by other authors, such as Daniel Redondo (in Spanish) and Travis M. White. However, they both use QGIS, while in this post I will work entirely in R.

Some initial considerations we may need to bear in mind:

  • In this post I will use geom_ridgeline() instead of geom_density_ridges(). This gives us more control over the final plot, but with one point of attention: both the coordinates and the elevation should be in the same unit (see why). Therefore we should project both the raster and the base sf object to a suitable CRS defined in meters (in this case).

  • Joyplots are much cooler when only a few lines are displayed. This is directly related to the number of rows of our raster. A very detailed raster (i.e. one with lots of rows) would produce a much more detailed plot, but it may not suit our needs.

Now, let’s start with the required libraries:


# Libraries

# Spatial
library(sf)
library(terra)
library(giscoR) # Shapes
library(elevatr)

# Data viz and wrangling
library(ggplot2)
library(dplyr)
library(ggridges)

The first step consists of selecting our region of interest (an sf object) and extracting its elevation data. We can achieve that with giscoR and elevatr. In this post I will create a joyplot of Andalusia. Note that, since we are creating just a visualization, the resolution of the sf object is not very relevant.


# Select a Spanish Region: Andalucia
region <- gisco_get_nuts(nuts_id = "ES61") %>%
  # And project data
  st_transform(25830)

Now we extract the elevation using elevatr, adjusting the zoom level as needed. You can find good guidance on zoom levels in the OpenStreetMap wiki.


dem <- get_elev_raster(region, z = 7, clip = "bbox", expand = 10000) %>%
  # And convert to terra
  rast() %>%
  # Mask to the shape
  mask(vect(region))

# Rename layer for further manipulation
names(dem) <- "elev"

nrow(dem)
#> [1] 698

terra::plot(dem)

plot of chunk 20220501_andalucia_dem

We already have our elevation raster. The next step is to reduce its number of rows. We can aggregate the raster (i.e. reduce the number of cells by increasing their size) using a scaling factor that brings the number of rows down to our target (in this case, 90 rows):

# Approx
factor <- round(nrow(dem) / 90)

dem_agg <- aggregate(dem, factor)

nrow(dem_agg)
#> [1] 88

terra::plot(dem_agg)

plot of chunk 20220501_andalucia_dem_agg

We can check that the number of rows has decreased. The plot also shows that we now have fewer cells.

Now, we may need to perform additional manipulations on the values of the raster:

  • We need to ensure that all the valid values are equal to or greater than zero.

  • We will replace with zero the NAs produced when masking the raster. We will use this later to decide which parts of the plot to remove.

After that, we will create a data frame with the information needed to build the joyplot.


dem_agg[dem_agg < 0] <- 0
dem_agg[is.na(dem_agg)] <- 0

dem_df <- as.data.frame(dem_agg, xy = TRUE, na.rm = FALSE)

as_tibble(dem_df)
#> # A tibble: 12,848 x 3
#>          x        y  elev
#>      <dbl>    <dbl> <dbl>
#>  1  77184. 4306315.     0
#>  2  81049. 4306315.     0
#>  3  84914. 4306315.     0
#>  4  88779. 4306315.     0
#>  5  92644. 4306315.     0
#>  6  96510. 4306315.     0
#>  7 100375. 4306315.     0
#>  8 104240. 4306315.     0
#>  9 108105. 4306315.     0
#> 10 111970. 4306315.     0
#> # ... with 12,838 more rows

Now is a good moment to adjust the units of the coordinates and the elevation if needed. In this case both are in meters, but I will show you how to perform those adjustments with the units package:


library(units)

# Units of DEM projection
units_crs <- st_crs(dem_agg)$units

units_crs
#> [1] "m"

# Example, convert to miles
# Adjust as needed

dem_miles <- dem_df %>%
  mutate(
    x = set_units(x, "m"),
    x_mile = set_units(x, "mi"),
    y = set_units(y, "m"),
    y_mile = set_units(y, "mi"),
    elev = set_units(elev, "m"),
    elev_mile = set_units(elev, "mi")
  )

as_tibble(dem_miles)
#> # A tibble: 12,848 x 6
#>          x        y elev x_mile y_mile elev_mile
#>        [m]      [m]  [m]   [mi]   [mi]      [mi]
#>  1  77184. 4306315.    0   48.0  2676.         0
#>  2  81049. 4306315.    0   50.4  2676.         0
#>  3  84914. 4306315.    0   52.8  2676.         0
#>  4  88779. 4306315.    0   55.2  2676.         0
#>  5  92644. 4306315.    0   57.6  2676.         0
#>  6  96510. 4306315.    0   60.0  2676.         0
#>  7 100375. 4306315.    0   62.4  2676.         0
#>  8 104240. 4306315.    0   64.8  2676.         0
#>  9 108105. 4306315.    0   67.2  2676.         0
#> 10 111970. 4306315.    0   69.6  2676.         0
#> # ... with 12,838 more rows

Finally, we can create our joyplot. Note that we can “train” the scales of our ggplot to a spatial object automatically if we pass our region object into the plot. The relative height of the ridges is controlled via the scale parameter:


ggplot() +
  # Just for the scales, pass with NA arguments so it is not shown
  geom_sf(data = region, color = NA, fill = NA) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    scale = 25
  ) +
  theme_ridges()

plot of chunk 20220501_andalucia_ridges

The last step is to provide a black theme, resembling the cover of the album:


ggplot() +
  geom_sf(data = region, color = NA, fill = NA) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    scale = 25,
    fill = "black",
    color = "white",
    size = .25
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black"))

plot of chunk 20220501_andalucia_joyplot

Variations

We can produce some variations of the same map using several parameters and other artifacts.

Using geom_density_ridges()

We can use geom_density_ridges() with stat = "identity" instead of geom_ridgeline() to create a similar map:


ggplot() +
  geom_sf(data = region, color = NA, fill = NA) +
  geom_density_ridges(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    stat = "identity",
    scale = 25,
    fill = "black",
    color = "white",
    size = .25
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black"))

plot of chunk 20220501_andalucia_ridges_dens

Land only

If we use geom_ridgeline(), it is quite easy to remove some parts of the lines, as the parameter min_height allows us to control the minimum height to be plotted. I found this much more difficult with geom_density_ridges(), where the equivalent parameter rel_min_height is relative to the overall maximum height.

This is the main reason why we replaced the NA values with zeros: those parts of the lines can then be easily removed.
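As a small illustration of the difference (assuming the dem_df data frame built earlier), an absolute min_height cutoff translates into a relative cutoff for geom_density_ridges() like this:

```r
# min_height is an absolute value (meters, in our case), while
# rel_min_height is a fraction of the overall maximum height
abs_cutoff <- 0.1
rel_cutoff <- abs_cutoff / max(dem_df$elev)
rel_cutoff
```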

ggplot() +
  geom_sf(data = region, color = NA, fill = NA) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    scale = 25,
    fill = "black",
    color = "white",
    size = .25,
    min_height = 0.1
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black"))

plot of chunk 20220501_andalucia_landonly

With colors

We can apply different colors to the plot. Note that ggridges only accepts different aes values by row, not by column:


# Classify on three different bands

dem_df <- dem_df %>%
  mutate(class = cut_number(y, n = 3))

ggplot() +
  geom_sf(data = region, color = NA, fill = NA) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev,
      color = class
    ),
    scale = 25,
    fill = "black",
    size = .5,
    show.legend = FALSE
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black")) +
  scale_color_manual(values = alpha(
    c(
      "#007A33",
      "white",
      "#007A33"
    ),
    .95
  ))

plot of chunk 20220501_andalucia_colors

Combine with other objects

For example, using an sf object:


highres <- gisco_get_nuts(
  nuts_id = "ES61",
  resolution = 1
) %>%
  # And project data
  st_transform(25830) %>%
  st_buffer(-5000)

ggplot() +
  geom_sf(data = highres, color = NA, fill = "#007A33", alpha = 0.95) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    scale = 25,
    fill = "black",
    color = "white",
    size = .25,
    min_height = 0.1
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black"))

plot of chunk 20220501_andalucia_combine

Or maybe adding a frame to the plot:


frame <- as.polygons(dem_agg, extent = TRUE) %>%
  st_as_sf()

ggplot() +
  geom_sf(data = frame, color = "lightblue", fill = NA, size = 2) +
  geom_ridgeline(
    data = dem_df, aes(
      x = x, y = y,
      group = y,
      height = elev
    ),
    scale = 25,
    fill = "black",
    color = "white",
    size = .25
  ) +
  theme_void() +
  theme(plot.background = element_rect(fill = "black"))

plot of chunk 20220501_andalucia_frame

References

Craft Jr, H. D. (1970). Radio observations of the pulse profiles and dispersion measures of twelve pulsars. Cornell University.

Mitton, Simon (1977). The Cambridge encyclopaedia of astronomy. Prentice-Hall of Canada.

Lipez, Zachary (2019, June 14). “How Joy Division’s ‘Unknown Pleasures`’ image went from underground album cover to a piece of cultural ubiquity” The Washington Post. https://wapo.st/3K6Chsc

White, T. M. (2019). Cartographic Pleasures: Maps Inspired by Joy Division’s Unknown Pleasures Album Art. Cartographic Perspectives, (92), 65–78. https://doi.org/10.14714/CP92.1536

Redondo, Daniel (2020, January 25). “Mapas estilo Joy Division con QGIS y R.” https://danielredondo.com/blog/2020-01-25-joy_division/

  1. Other versions of the story credit drummer Stephen Morris for finding it. 

]]>
https://dieghernan.github.io/202205_Unknown-pleasures-R/ posts ggplot2ggridgesgiscoRmapsr_bloggersrspatialrstatsterra https://dieghernan.github.io/202205_Unknown-pleasures-R/ Sun, 01 May 2022 00:00:00 +0200
Corona timelapse Travel restrictions amidst the COVID crisis across time - A German perspective

1 min.

Project discontinued

corona-timelapse

In April 2021, my brother Diego and I saw the need for a friendly and automated interface to the meticulous and ever-changing restrictions that the German authorities imposed on travel abroad amid the COVID crisis.

Despite the drastic variations that most foreign countries have experienced in this regard during the last year, the mild character of the currently predominant Omicron variant has finally led Germany to lift all COVID-related obstacles to international travel. The long-awaited contemplation of an all-green world map has filled us with joy, but, to be honest, it has also rendered our website quite boring, no matter how many languages we have translated it into (6 as of now 🇩🇪 🇬🇧 🇪🇸 🇫🇷 🇵🇱 🇹🇷).

To compensate for this, we have put all the pieces together to produce a chronology of the COVID crisis through the lens of the Robert Koch Institut (RKI), the German entity that was responsible for the COVID risk assessment of international areas. Most of the effort was already made, since we had been scraping RKI’s information (via scrapy) for almost one year. To complete the puzzle, we only needed to apply our method to the old risk assessment reports that had been issued before we started the project. That is, we have just combined our already developed scraping muscle with the Archive.org Wayback Machine, as provided by Evan Sangaline, and we have of course worked around the inconsistencies of German bureaucracy.

In the end, we hope that corona-atlas remains boring and that our world map remains green. Now it is finally time for quite some Wanderlust!

]]>
https://dieghernan.github.io/202203_Corona-timelapse/ posts COVID19Rdiscontinuedguest-authorleafletmapsprojectpython https://dieghernan.github.io/202203_Corona-timelapse/ Mon, 14 Mar 2022 00:00:00 +0100
Insets with ggplot2 and tmap - and mapsf! A map on a map

3 min.

This post is dedicated to Dominic Royé, AKA \@dr_xeo

A common challenge when creating maps is how to include an inset map in your visualization. An inset map is nothing more than a smaller map, usually placed in a corner, that provides additional context to the overall map. It is also useful for representing spatial units that form part of a country but whose geographical location would make for an awkward visualization, or for including small units that otherwise would not show up on the map.

I have already covered this topic using the base plot() function, but this time I will show how to produce these insets using the ggplot2 and tmap packages. In short: use the cowplot package.

Test case: the Canary Islands as an inset

In this example, I will create a map of Spain using mapSpain, adding an inset for the Canary Islands.

The “true” map of Spain is:

library(mapSpain)
library(sf)
library(ggplot2)
library(dplyr)

regions <- esp_get_ccaa(moveCAN = FALSE)

ggplot(regions) +
  geom_sf()

plot of chunk 20220303_truemap

I will use a different CRS for each part of Spain: ETRS89 / UTM 30N (EPSG:25830) for mainland Spain, and REGCAN95 / UTM 28N (EPSG:4083) for the Canary Islands.

main <- regions %>%
  filter(ccaa.shortname.es != "Canarias") %>%
  st_transform(25830)

ggplot(main) +
  geom_sf()

plot of chunk 20220303_mainsub

island <- regions %>%
  filter(ccaa.shortname.es == "Canarias") %>%
  st_transform(4083)

ggplot(island) +
  geom_sf()

plot of chunk 20220303_mainsub

So that was easy! Just a couple of maps using ggplot2. Let’s start mixing and matching!

On ggplot2

We have already created two quick maps with ggplot2. Now, to produce our map with an inset we will:

  1. Produce two plots, the main plot and the sub plot, with a minimal style, and store them as ggplot2 objects.

  2. Combine both objects with cowplot.

# Main plot
main_gg <- ggplot(main) +
  geom_sf() +
  theme_void() +
  theme(
    plot.background = element_rect(fill = "grey85", colour = NA),
    # Add a bit of margin on the bottom left
    # We would place the inset there
    plot.margin = margin(l = 80, b = 80)
  )

# Sub plot
sub_gg <- ggplot(island) +
  geom_sf() +
  theme_void() +
  # Add a border to the inset
  theme(
    panel.border = element_rect(fill = NA, colour = "black"),
    plot.background = element_rect(fill = "grey95")
  )

We have our objects in place, and now is when the magic happens! With cowplot we can combine both maps into a single one. You may need to play a bit with the parameters x, y, hjust and vjust of the sub plot to improve the placement:

library(cowplot)

ggdraw() +
  draw_plot(main_gg) +
  draw_plot(sub_gg,
    height = 0.2,
    x = -0.25,
    y = 0.08
  )

plot of chunk 20220303_insetggplot

Note also that this approach is valid not only for maps but for any type of plot produced by ggplot2, since cowplot is not specific to map objects:

# Combining non-spatial plots
library(palmerpenguins)

mass_flipper <- ggplot(
  data = penguins,
  aes(
    x = flipper_length_mm,
    y = body_mass_g
  )
) +
  geom_point(aes(
    color = species,
    shape = species
  ),
  size = 3,
  alpha = 0.8
  ) +
  theme_minimal() +
  scale_color_manual(values = c("darkorange", "purple", "cyan4"))

flipper_hist <- ggplot(data = penguins, aes(x = flipper_length_mm)) +
  geom_histogram(aes(fill = species),
    alpha = 0.5,
    position = "identity",
    show.legend = FALSE
  ) +
  scale_fill_manual(values = c("darkorange", "purple", "cyan4")) +
  theme_void() +
  theme(plot.background = element_rect(fill = "white"))


# Non-sense plot!
ggdraw() +
  draw_plot(mass_flipper) +
  draw_plot(flipper_hist,
    scale = 0.25,
    y = 0.3,
    x = -0.2
  )

plot of chunk 20220303_insetggplot_nonsense

On tmap

We can follow a similar approach with tmap. On versions v3.x.x (a revamped version is in development) we can use tmap_grob() to convert tmap objects into objects that cowplot can handle.

library(tmap)

main_tmap <- tm_shape(main) +
  tm_polygons() +
  tm_layout(
    inner.margins = c(.3, .3, 0, 0),
    frame = FALSE
  )


main_tmap <- tmap_grob(main_tmap)

sub_tmap <- tm_shape(island) +
  tm_polygons()

sub_tmap <- tmap_grob(sub_tmap)

Once we have these new “grobs”, we can use the same approach that we applied to the ggplot2 objects.

ggdraw() +
  draw_plot(main_tmap) +
  draw_plot(sub_tmap,
    height = 0.3,
    x = -0.2
  )

plot of chunk 20220303_insettmap

Update: On mapsf

Timothée Giraud (AKA \@rgeomatic), the developer of mapsf, also shared how to create inset maps using that package:

library(mapsf)

mf_map(main)
mf_inset_on(island, pos = "bottomright", cex = .3)
mf_map(island)
box(lwd = .5)
mf_inset_off()

plot of chunk 20220303_insetmapsf

]]>
https://dieghernan.github.io/202203_insetmaps/ posts ggplot2insetmapSpainmapsmapsfr_bloggersrspatialrstatssftmap https://dieghernan.github.io/202203_insetmaps/ Thu, 03 Mar 2022 00:00:00 +0100
Beautiful Maps with R (IV): Fun with flags revisited Any picture as a basemap

2 min.

On 27 Jan. 2022 my package rasterpic was accepted on CRAN (Hooray!!). This package allows you to geotag images, using a spatial object (from sf or terra) as a geographic reference.

I tweeted about it, and it got good feedback from the #rspatial community:

I also received an interesting reply from Hefin Ioan Rhys @HRJ21:

That reminded me of a previous post I wrote when I added some new functions to the cartography package, now replaced by the mapsf package.

With rasterpic we now have an alternative tool for creating maps using images, and this quick post will show you how to do it.

I will replicate the Africa map presented in my previous post, but this time using newer packages: the giscoR package and the development version of ggspatial (not released yet), which adds support for SpatRaster objects in ggplot2. The flags are extracted from the GitHub repository https://github.com/hampusborgos/country-flags.


# Development version of ggspatial
# devtools::install_github("paleolimbot/ggspatial")
library(ggspatial)
library(ggplot2)
library(giscoR)
library(dplyr)
library(rasterpic)

# For country names
library(countrycode)

world <- gisco_get_countries(epsg = 3857)
africa <- gisco_get_countries(region = "Africa", epsg = 3857)

# Base map of Africa
plot <- ggplot(world) +
  geom_sf(fill = "grey90") +
  theme_minimal() +
  theme(panel.background = element_rect(fill = "lightblue"))

plot +
  # Zoom on Africa
  coord_sf(
    xlim = c(-2000000, 6000000),
    ylim = c(-4000000, 5000000)
  )

plot of chunk 20220128_africa

Now, let’s add the flags with a loop:


# Add the ISO2 code to each African country
africa$iso2 <- countrycode(africa$ISO3_CODE, "iso3c", "iso2c")

# Get flags from repo - low quality to speed up the code
flagrepo <- "https://raw.githubusercontent.com/hjnilsson/country-flags/master/png250px/"

# Loop and add
for (iso in africa$iso2) {
  # Download pic and plot
  imgurl <- paste0(flagrepo, tolower(iso), ".png")
  tmpfile <- tempfile(fileext = ".png")
  download.file(imgurl, tmpfile, quiet = TRUE, mode = "wb")

  # Raster
  x <- africa %>% filter(iso2 == iso)
  x_rast <- rasterpic_img(x, tmpfile, crop = TRUE, mask = TRUE)
  plot <- plot + layer_spatial(x_rast)
}

plot +
  geom_sf(data = africa, fill = NA) +
  # Zoom on Africa
  coord_sf(
    xlim = c(-2000000, 6000000),
    ylim = c(-4000000, 5000000)
  )

plot of chunk 20220128_flag

]]>
https://dieghernan.github.io/202201_maps-flags/ posts beautiful_mapsflagsgiscoRmapsr_bloggersrasterpicrspatialrstatsterra https://dieghernan.github.io/202201_maps-flags/ Fri, 28 Jan 2022 00:00:00 +0100
Corona Atlas Interactive map of the international COVID-19 risk areas as designated by the German authorities.

1 min.

Project discontinued

corona-logo

Visit https://dieghernan.github.io/corona-atlas.de/

Interactive map of the international COVID-19 risk areas as designated by the German authorities.

The data is updated periodically from the website of the Robert Koch Institute.

Data scraping is performed on Python with scrapy. The scraper also uses pandas and pycountry.

For the prototype version, the map visualization was created with R, generating a map via {rmarkdown} using {leaflet}, {giscoR} and some packages included in the tidyverse. For deployment, the map logic was moved to JavaScript to scale across multiple languages.

Read more on this post

]]>
https://dieghernan.github.io/projects/corona-atlas/ projects COVID19Rdiscontinuedleafletmapsprojectpython https://dieghernan.github.io/projects/corona-atlas/ Fri, 30 Apr 2021 00:00:00 +0200
spain-munic-bot A twitter bot written in R.

1 min.

Project discontinued

🤖 Twitter bot: random municipalities of Spain 🇪🇸 with {mapSpain} posted with {rtweet} via a GitHub Action

Hi! I am a bot 🤖 that tweets a random map of a Spanish municipality with its name, province, and autonomous community (and an inset map of Spain showing the region and the community). I run 🏃‍♀️ every 20 minutes.

I have a website!!

📦 R packages

Core packages used in the project are:

  • {mapSpain} for the location of the municipalities, base polygons and coordinates and imagery,
  • {osmdata} for the streets,
  • {tmap} for plotting,
  • {rtweet} for posting,

Other packages used are {sf}, {dplyr} and other common supporting packages.

]]>
https://dieghernan.github.io/projects/spain-munic-bot/ projects Rdiscontinuedmapsprojecttwitter https://dieghernan.github.io/projects/spain-munic-bot/ Fri, 29 Jan 2021 00:00:00 +0100
Leaflet-providersESP Plugin for Leaflet.js

1 min.

Leaflet-providersESP is a plugin for Leaflet that contains configurations for various tile layers provided by public bodies of Spain.

DOI

Demo

Full docs and examples on https://dieghernan.github.io/leaflet-providersESP/

The following code generates a Leaflet map with a layer provided by Leaflet-providersESP.

<!DOCTYPE html>
<html>
<head>
	<title>Minimal page | leaflet-providersESP</title>
	<meta charset="utf-8" />
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<!-- Load Leaflet -->
	<link rel="stylesheet" href="https://unpkg.com/[email protected]/dist/leaflet.css" />
	<script src="https://unpkg.com/[email protected]/dist/leaflet.js"></script>
	<!-- Install leaflet-providersESP -->
	<script src="https://cdn.jsdelivr.net/gh/dieghernan/leaflet-providersESP/dist/leaflet-providersESP.min.js"></script>
	<!-- Display map full page -->
	<style>
	html {
		height: 100%
	}
	body {
		height: 100%;
		margin: 0;
		padding: 0;
	}
	#mapid {
		height: 100%;
		width: 100%
	}
	</style>
</head>
<body>
	<!-- Create map -->
	<div id="mapid"></div>
	<!-- Puerta del Sol - IDErioja server -->
	<script>
	var mymap = L.map('mapid').setView([40.4166, -3.7038400], 18);
	L.tileLayer.providerESP('IDErioja').addTo(mymap);
	</script>
</body>
</html>

All providers

]]>
https://dieghernan.github.io/projects/leaflet-providersESP/ projects leafletmapsproject https://dieghernan.github.io/projects/leaflet-providersESP/ Mon, 26 Oct 2020 00:00:00 +0100
Head/Tails breaks on the classInt package. 11 min.

There are far more ordinary people (say, 80 percent) than extraordinary people (say, 20 percent); this is often characterized by the 80/20 principle, based on the observation made by the Italian economist Vilfredo Pareto in 1906 that 80% of land in Italy was owned by 20% of the population. A histogram of the data values for these phenomena would reveal a right-skewed or heavy-tailed distribution. How to map the data with the heavy-tailed distribution?

Jiang (2013)

Abstract

This vignette discusses the implementation of the “Head/tail breaks” style (Jiang (2013)) on the classIntervals function of the classInt package. A step-by-step example is presented in order to clarify the method. A case study using spData::afcon is also included, making use of additional packages such as sf.

Introduction

The Head/tail breaks, sometimes referred to as the ht-index (Jiang and Yin (2013)), is a classification scheme introduced by Jiang (2013) in order to find groupings or hierarchy for data with a heavy-tailed distribution.

Heavy-tailed distributions are heavily right skewed, with a minority of large values in the head and a majority of small values in the tail. This imbalance between the head and tail, or between many small values and a few large values, can be expressed as “far more small things than large things”.

Heavy-tailed distributions are commonly characterized by a power law, a lognormal or an exponential function. Nature, society, finance (Vasicek (2002)) and our daily lives are full of rare and extreme events, termed “black swan events” (Taleb (2008)). This provides a good reason to reverse our thinking by focusing on low-frequency events.
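The “far more small things than large things” imbalance can also be checked numerically before plotting. This is an illustrative base-R check using the same Pareto draw (seed 1234) that the chunk below generates:

```r
set.seed(1234)
# Pareto distribution a = 1, b = 1.161, n = 1000 (same draw as below)
sample_par <- 1 / (1 - runif(1000))^(1 / 1.161)

# Share of "small things": observations below the mean
mean(sample_par < mean(sample_par))
# 0.855 with this seed: the vast majority of values sit below the mean
```

Around 85% of the values fall in the tail below the mean, the signature of a heavy-tailed sample.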

library(classInt)

# 1. Characterization of heavy-tail distributions----
set.seed(1234)
# Pareto distribution a=1 b=1.161 n=1000
sample_par <- 1 / (1 - runif(1000))^(1 / 1.161)
opar <- par(no.readonly = TRUE)
par(mar = c(2, 4, 3, 1), cex = 0.8)
plot(
  sort(sample_par, decreasing = TRUE),
  type = "l",
  ylab = "F(x)",
  xlab = "",
  main = "80/20 principle"
)
abline(
  h = quantile(sample_par, .8),
  lty = 2,
  col = "red3"
)
abline(
  v = 0.2 * length(sample_par),
  lty = 2,
  col = "darkblue"
)
legend(
  "topleft",
  legend = c("F(x): p80", "x: Top 20%"),
  col = c("red3", "darkblue"),
  lty = 2,
  cex = 0.8
)

hist(
  sample_par,
  n = 100,
  xlab = "",
  main = "Histogram",
  col = "grey50",
  border = NA,
  probability = TRUE
)
par(opar)

plot of chunk 20200405_charheavytail

Breaking method

The method itself consists of a four-step process performed recursively until a stopping condition is satisfied. Given a vector of values \(v = (a_1, a_2, ..., a_n)\) the process can be described as follows:

  1. On each iteration, compute \(\mu = \sum_{i=1}^{n} a_i \:\:\: \forall \: a_i \in v\).
  2. Break \(v\) into the \(tail\) and the \(head\): \(tail = \{ a_x \in v \mid a_x < \mu \}\), \(head = \{ a_x \in v \mid a_x > \mu \}\).
  3. Assess whether the proportion of \(head\) over \(v\) is lower than or equal to a given threshold: \(\frac{|head|}{|v|} \le threshold\)
  4. If 3 is TRUE, repeat 1 to 3 until the condition is FALSE or no more partitions are possible (i.e. \(head\) has less than two elements).

It is important to note that, at the beginning of a new iteration, \(v\) is replaced by \(head\). The underlying hypothesis is to create partitions until the head and the tail are balanced in terms of distribution. So the stopping criterion is satisfied when the last head and the last tail are evenly balanced.

In terms of threshold, Jiang, Liu, and Jia (2013) set 40% as a good approximation, meaning that if the \(head\) contains more than 40% of the observations the distribution is not considered heavy-tailed.

The final breaks are the vector of consecutive \(\mu\):

\[breaks = (\mu_1, \mu_2, \mu_3, ..., \mu_n)\]

Step by step example

We reproduce here the pseudo-code1 as per Jiang (2019):

Recursive function Head/tail Breaks:
 Rank the input data from the largest to the smallest
 Break the data into the head and the tail around the mean;
 // the head for those above the mean
 // the tail for those below the mean
 While (head <= 40%):
 Head/tail Breaks (head);
End Function
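The pseudo-code above can be written as a small recursive base-R function. This is a hypothetical helper for illustration only (classIntervals(style = "headtails") is the actual implementation); it returns just the vector of consecutive means, without the min/max endpoints:

```r
# Hypothetical helper, not part of classInt: recursive head/tails means
ht_mus <- function(v, thr = 0.4) {
  mu <- mean(v)      # step 1: mean of the current vector
  head <- v[v > mu]  # step 2: the head holds the values above the mean
  # steps 3-4: recurse while the head is a minority (< thr) and still splittable
  if (length(head) / length(v) < thr && length(head) > 1) {
    c(mu, ht_mus(head, thr))
  } else {
    mu
  }
}

x <- c(rep(1, 20), 50, 60, 1000)
ht_mus(x)
# 49.13043 370.00000: the overall mean, then the mean of the head c(50, 60, 1000)
```

The final breaks as returned by classIntervals then add the extremes: c(min(x), ht_mus(x), max(x)).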

A step-by-step example in R (for illustrative purposes) has been developed:

opar <- par(no.readonly = TRUE)
par(mar = c(2, 2, 3, 1), cex = 0.8)
var <- sample_par
thr <- .4
brks <- c(min(var), max(var)) # Initialise with min and max

sum_table <- data.frame(
  iter = 0,
  mu = NA,
  prop = NA,
  n_var = NA,
  n_head = NA
)
# Pars for chart
limchart <- brks
# Iteration
for (i in 1:10) {
  mu <- mean(var)
  brks <- sort(c(brks, mu))
  head <- var[var > mu]
  prop <- length(head) / length(var)
  stopit <- prop < thr & length(head) > 1
  sum_table <- rbind(
    sum_table,
    c(i, mu, prop, length(var), length(head))
  )
  hist(
    var,
    main = paste0("Iter ", i),
    breaks = 50,
    col = "grey50",
    border = NA,
    xlab = "",
    xlim = limchart
  )
  abline(v = mu, col = "red3", lty = 2)
  ylabel <- max(hist(var, breaks = 50, plot = FALSE)$counts)
  labelplot <- paste0("PropHead: ", round(prop * 100, 2), "%")
  text(
    x = mu,
    y = ylabel,
    labels = labelplot,
    cex = 0.8,
    pos = 4
  )
  legend(
    "right",
    legend = paste0("mu", i),
    col = c("red3"),
    lty = 2,
    cex = 0.8
  )
  if (isFALSE(stopit)) {
    break
  }
  var <- head
}
par(opar)

plot of chunk 20200405_stepbystep

As can be seen, in each iteration the resulting head gradually loses the heavy-tail property, until the stopping condition is met.

iter mu prop n_var n_head
1 5.6755 14.5% 1000 145
2 27.2369 21.38% 145 31
3 85.1766 19.35% 31 6
4 264.7126 50% 6 3

The resulting breaks are then defined as breaks = c(min(var), mu1, mu2, ..., mu_n, max(var)).

Implementation on classInt package

The implementation in the classIntervals function should replicate the results:

ht_sample_par <- classIntervals(sample_par, style = "headtails")
brks == ht_sample_par$brks
## [1] TRUE TRUE TRUE TRUE TRUE TRUE

As stated in Jiang (2013), the number of breaks is naturally determined; however, the thr parameter can help adjust the final number. A lower value of thr produces fewer breaks, while a larger thr increases their number, provided the underlying distribution follows the “far more small things than large things” principle.

opar <- par(no.readonly = TRUE)
par(mar = c(2, 2, 2, 1), cex = 0.8)

pal1 <- c("wheat1", "wheat2", "red3")
# Minimum: single break
print(paste("number of breaks", length(classIntervals(sample_par, style = "headtails", thr = 0)$brks) - 1))
plot(
  classIntervals(sample_par, style = "headtails", thr = 0),
  pal = pal1,
  main = "thr = 0"
)

# Two breaks
print(paste("number of breaks", length(classIntervals(sample_par, style = "headtails", thr = 0.2)$brks) - 1))
plot(
  classIntervals(sample_par, style = "headtails", thr = 0.2),
  pal = pal1,
  main = "thr = 0.2"
)

# Default breaks: 0.4
print(paste("number of breaks", length(classIntervals(sample_par, style = "headtails")$brks) - 1))
plot(classIntervals(sample_par, style = "headtails"),
  pal = pal1,
  main = "thr = Default"
)

# Maximum breaks
print(paste("number of breaks", length(classIntervals(sample_par, style = "headtails", thr = 1)$brks) - 1))
plot(
  classIntervals(sample_par, style = "headtails", thr = 1),
  pal = pal1,
  main = "thr = 1"
)
par(opar)

plot of chunk 20200405_examplesimp

The method always returns at least one break, corresponding to mean(var).

Case study

Jiang (2013) states that "the new classification scheme is more natural than the natural breaks in finding the groupings or hierarchy for data with a heavy-tailed distribution" (p. 482), referring to Jenks’ natural breaks method. In this case study we compare headtails vs. fisher, the alias for the Fisher-Jenks algorithm, which is always preferred to the jenks style (see ?classIntervals). For this example we use the afcon dataset from the spData package, plus some additional spatial information, in order to create the data visualization.

library(spData)
data(afcon, package = "spData")

Let’s have a look at the Top 10 values and the distribution of the variable totcon (index of total conflict 1966-78):

# Top10
knitr::kable(head(afcon[order(afcon$totcon, decreasing = TRUE), c("name", "totcon")], 10))

opar <- par(no.readonly = TRUE)
par(mar = c(4, 4, 3, 1), cex = 0.8)
hist(afcon$totcon,
  n = 20,
  main = "Histogram",
  xlab = "totcon",
  col = "grey50",
  border = NA,
)
plot(
  density(afcon$totcon),
  main = "Distribution",
  xlab = "totcon",
)
par(opar)

plot of chunk 20200405_summspdata

The data shows that EG and SU present a clear hierarchy over the rest of the values. The histogram confirms a heavy-tailed distribution, and therefore the “far more small things than large things” principle.

As a further check, on top of headtails and fisher we also use quantile, to get a broader view of the different breaking styles. As quantile is a position-based metric, it doesn’t account for the magnitude of F(x) (hierarchy), so the breaks are defined solely by the position of x in the distribution.
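This rank-only behavior is easy to verify with base R alone (toy vector for illustration, not the afcon data):

```r
# Toy heavy-tailed vector (hypothetical values, not afcon$totcon)
x <- c(1, 2, 3, 4, 1000)

# Quartile breaks depend only on positions in the sorted data...
q_brks <- quantile(x, probs = seq(0, 1, 0.25))

# ...so every class keeps its share of observations, no matter how
# extreme the top value is: the counts are 2, 1, 1, 1
table(cut(x, q_brks, include.lowest = TRUE))
```

Replacing 1000 by 10 leaves the class counts untouched; only the break positions move.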

Applying the three aforementioned methods to break the data:

brks_ht <- classIntervals(afcon$totcon, style = "headtails")
print(brks_ht)
# Same number of classes for "fisher"
nclass <- length(brks_ht$brks) - 1
brks_fisher <- classIntervals(afcon$totcon,
  style = "fisher",
  n = nclass
)
print(brks_fisher)

brks_quantile <- classIntervals(afcon$totcon,
  style = "quantile",
  n = nclass
)
print(brks_quantile)

pal1 <- c("wheat1", "wheat2", "red3")
opar <- par(no.readonly = TRUE)
par(mar = c(2, 2, 2, 1), cex = 0.8)
plot(brks_ht, pal = pal1, main = "headtails")
plot(brks_fisher, pal = pal1, main = "fisher")
plot(brks_quantile, pal = pal1, main = "quantile")
par(opar)

plot of chunk 20200405_breaksample

It is observed that the top three classes of headtails enclose 5 observations, whereas fisher includes 13. In terms of classification, headtails focuses more on extreme values.

The next plot compares a continuous distribution of totcon rescaled to the range [1, nclass] versus the distribution across breaks for each style. The continuous distribution has been offset by -0.5 in order to align the continuous and the discrete distributions.

# Helper function to rescale values
help_reescale <- function(x, min = 1, max = 10) {
  r <- (x - min(x)) / (max(x) - min(x))
  r <- r * (max - min) + min
  return(r)
}
afcon$ecdf_class <- help_reescale(afcon$totcon,
  min = 1 - 0.5,
  max = nclass - 0.5
)
afcon$ht_breaks <- cut(afcon$totcon,
  brks_ht$brks,
  labels = FALSE,
  include.lowest = TRUE
)

afcon$fisher_breaks <- cut(afcon$totcon,
  brks_fisher$brks,
  labels = FALSE,
  include.lowest = TRUE
)

afcon$quantile_break <- cut(afcon$totcon,
  brks_quantile$brks,
  labels = FALSE,
  include.lowest = TRUE
)

opar <- par(no.readonly = TRUE)
par(mar = c(4, 4, 1, 1), cex = 0.8)
plot(
  density(afcon$ecdf_class),
  ylim = c(0, 0.8),
  lwd = 2,
  main = "",
  xlab = "class"
)
lines(density(afcon$ht_breaks), col = "darkblue", lty = 2)
lines(density(afcon$fisher_breaks), col = "limegreen", lty = 2)
lines(density(afcon$quantile_break),
  col = "red3",
  lty = 2
)
legend("topright",
  legend = c(
    "Continuous", "headtails",
    "fisher", "quantile"
  ),
  col = c("black", "darkblue", "limegreen", "red3"),
  lwd = c(2, 1, 1, 1),
  lty = c(1, 2, 2, 2),
  cex = 0.8
)
par(opar)

plot of chunk 20200405_benchmarkbreaks

It can be observed that the distribution of headtails breaks is also heavy-tailed, and closer to the original distribution. At the other extreme, “quantile” provides a quasi-uniform distribution, ignoring the totcon hierarchy.

In terms of data visualization, we compare here the final maps produced with the techniques mentioned above. For this plotting exercise, a choropleth map is created.

Additionally, a high-granularity choropleth map is created with a greater number of classes, in order to compare and contrast the actual grouping options against a more granular approach.

library(sf)
library(giscoR)
library(cartography)

opar <- par(no.readonly = TRUE)

par(
  mfrow = c(2, 2),
  mar = c(1, 1, 1, 1),
  bg = "white"
)
africa <- gisco_get_countries(resolution = 60, region = "Africa", epsg = 3857)

afcon.sf <- st_as_sf(afcon, crs = 4326, coords = c("x", "y"))
afcon.sf <- st_transform(afcon.sf, st_crs(africa))
# afcon.sf <- st_join(africa[, "admin"], afcon.sf)
afcon.sf <- afcon.sf[order(afcon.sf$totcon), ]



# High granularity map
plot(st_geometry(africa), col = "grey80", border = NA)
propSymbolsLayer(
  afcon.sf,
  var = "totcon",
  inches = 0.2,
  col = adjustcolor("grey10", alpha.f = 0.5),
  border = NA
)
title(main = "High granularity map")

# Quantile

pal <- hcl.colors(5, palette = "inferno", alpha = 0.6)
plot(st_geometry(africa), col = "grey80", border = NA)
propSymbolsTypoLayer(
  afcon.sf,
  var = "totcon",
  inches = 0.2,
  col = pal,
  border = NA,
  legend.var.pos = "n",
  legend.var2.pos = "bottomleft",
  var2 = "quantile_break"
)
title(main = "Quantile")


# Fisher
plot(st_geometry(africa), col = "grey80", border = NA)
propSymbolsTypoLayer(
  afcon.sf,
  var = "totcon",
  inches = 0.2,
  col = pal,
  border = NA,
  legend.var.pos = "n",
  legend.var2.pos = "bottomleft",
  var2 = "fisher_breaks"
)
title(main = "Fisher")

# Head Tails
plot(st_geometry(africa), col = "grey80", border = NA)
propSymbolsTypoLayer(
  afcon.sf,
  var = "totcon",
  inches = 0.2,
  col = pal,
  border = NA,
  legend.var.pos = "n",
  legend.var2.pos = "bottomleft",
  var2 = "ht_breaks"
)
title(main = "Head Tails")
par(opar)

plot of chunk 20200405_finalplot

As per the results, headtails seems to provide a better understanding of the most extreme values when compared against the high-granularity plot. The quantile style, as expected, just provides a clustering without taking the real hierarchy into account. The fisher plot sits in between these two interpretations.

It is also important to note that headtails and fisher reveal different information that can be useful depending on the context. While headtails highlights the outliers, it fails to provide a good clustering in the tail, whereas fisher seems to reflect these patterns better. This can be observed in the values of Western Africa and the Niger River Basin: where headtails doesn’t highlight any special cluster of conflicts, fisher suggests a potential cluster, aligned with the high-granularity plot. This can be confirmed in the histogram generated previously, where a concentration of totcon around 1,000 is visible.

References

Jiang, Bin. 2013. "Head/Tail Breaks: A New Classification Scheme for Data with a Heavy-Tailed Distribution." The Professional Geographer 65 (3): 482–94. DOI.

———. 2019. "A Recursive Definition of Goodness of Space for Bridging the Concepts of Space and Place for Sustainability." Sustainability 11 (15): 4091. DOI.

Jiang, Bin, Xintao Liu, and Tao Jia. 2013. "Scaling of Geographic Space as a Universal Rule for Map Generalization." Annals of the Association of American Geographers 103 (4): 844–55. DOI.

Jiang, Bin, and Junjun Yin. 2013. "Ht-Index for Quantifying the Fractal or Scaling Structure of Geographic Features." Annals of the Association of American Geographers 104 (3): 530–40. DOI.

Taleb, Nassim Nicholas. 2008. The Black Swan: The Impact of the Highly Improbable. 1st ed. London: Random House.

Vasicek, Oldrich. 2002. "Loan Portfolio Value." Risk, December, 160–62.

  1. The method implemented in classInt corresponds to head/tails 1.0 as named in that article. 

]]>
https://dieghernan.github.io/202004_headtails/ posts classIntr_bloggersr_packagerspatialrstats https://dieghernan.github.io/202004_headtails/ Sun, 05 Apr 2020 00:00:00 +0200
COVID19 Microsite Tracking the outbreak in Spain by region

1 min.

Project discontinued

Visit the microsite with maps and official data on the impact of COVID19 in Spain.

https://dieghernan.github.io/COVID19 [In Spanish]

FallecidosEvo

Overall deceases in Spain by COVID19 - Evolution

]]>
https://dieghernan.github.io/projects/COVID19 projects COVID19Rcartographydiscontinuedmapsprojectsf https://dieghernan.github.io/projects/COVID19 Sat, 04 Apr 2020 00:00:00 +0200
New features on cartography package Vignette of the package expansion

6 min.

Introduction

The aim of this document is to describe the new features added to cartography in version 2.4.0 by dieghernan, already available on CRAN.

Those new features are:

  • hatchedLayer and legendHatched functions.
  • pngLayer and getPngLayer functions.
  • wordcloudLayer function.

These functions don’t handle sp objects on purpose, favoring sf instead.

Installation

#install.packages("cartography")
library(cartography)
packageVersion("cartography")
## [1] '3.0.0'

Hatched Map

A variant of typology/choropleth maps using a hatched filling. This is particularly useful for maps that need to be printed in black and white, such as those in academic papers. These maps are also useful for representing overlapping dimensions.

Example 1

library(sf)
library(jsonlite)
library(dplyr)

#Shape
cntries = st_read("https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/countries/geojson/CNTR_RG_20M_2016_3035.geojson",
                  stringsAsFactors = FALSE)
## Reading layer `CNTR_RG_20M_2016_3035' from data source `https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/countries/geojson/CNTR_RG_20M_2016_3035.geojson' using driver `GeoJSON'
## Simple feature collection with 257 features and 6 fields
## Geometry type: MULTIPOLYGON
## Dimension:     XY
## Bounding box:  xmin: -7142317 ymin: -9160665 xmax: 16932290 ymax: 15428010
## Projected CRS: ETRS89-extended / LAEA Europe

# Include trade blocks
df <- fromJSON("https://raw.githubusercontent.com/dieghernan/Country-Codes-and-International-Organizations/master/outputs/Countrycodesfull.json")
ISO_memcol <- function(df,
                       orgtosearch) {
  ind <- match(orgtosearch, unlist(df[1, "org_id"]))
  or <- lapply(1:nrow(df), function(x)
    unlist(df[x, "org_member"])[ind])
  or <- data.frame(matrix(unlist(or)), stringsAsFactors = F)
  names(or) <- orgtosearch
  df2 <- as.data.frame(cbind(df, or, stringsAsFactors = F))
  return(df2)
}
df <- ISO_memcol(df, "EU")
df <- ISO_memcol(df, "EFTA")
df <- ISO_memcol(df, "EuroArea")


cntries = merge(cntries,
                df,
                by.x = "ISO3_CODE",
                by.y = "ISO_3166_3",
                all.x = TRUE)

library(cartography)


#Limits EU
#Plot base map
plot(
  st_geometry(cntries),
  xlim = c(2200000, 7150000),
  ylim = c(1380000, 5500000),
)
plot(st_geometry(cntries), add = TRUE)
plot(st_geometry(cntries[!is.na(cntries$EU),]), col = "grey60", add = TRUE)
plot(st_geometry(cntries[!is.na(cntries$EFTA),]), col = "black", add = TRUE)
#Add hatching
hatchedLayer(
  cntries[!is.na(cntries$EuroArea),],
  pattern = "right2left",
  col = "white",
  density = 3,
  add = TRUE
)
legendTypo(
  "topright",
  title.txt = "",
  categ = c("EU", "EFTA"),
  col = c("grey60", "black"),
  nodata = FALSE
)

legendHatched(
  "right",
  title.txt = "Within EU",
  categ = "Euro\nArea",
  patterns = "right2left",
  frame = TRUE
)
layoutLayer(
  title = "European Trade Blocks",
  theme = "grey.pal",
  sources = "© EuroGeographics for the administrative boundaries.",
  author =  paste0("cartography ", packageVersion("cartography")),
  scale = 500,
  frame = TRUE
)

plot of chunk 20200217_hatched-min

Example 2

library(sf)
library(cartography)

# Plot World
plot(
  cntries$geometry,
  col = "white",
  xlim = c(2200000, 7150000),
  ylim = c(1380000, 5500000),
)
#Add layers for non european area - left2right
NOEUR = subset(cntries, CONTINENT.EN != "Europe" | is.na(CONTINENT.EN))
hatchedLayer(
  NOEUR,
  pattern = "left2right",
  add = TRUE,
  #Basic usage
  density = 2,
  #Densify default grid
  lwd = 1.2,
  lty = 3,
  col = "grey50"
)            #Formatting

#Extract Europe Regions
EUR = subset(cntries, CONTINENT.EN == "Europe" & SUBREGION.EN != "Western Asia")[, "SUBREGION.EN"]
levels <- sort(unique(EUR$SUBREGION.EN))


#Plot Regions
# First element with zigzag
hatchedLayer(EUR[EUR$SUBREGION.EN == levels[1], ],
             add = TRUE,
             density = 4,
             pattern = "zigzag")

#dot with parms
hatchedLayer(
  EUR[EUR$SUBREGION.EN == levels[2], ],
  add = TRUE,
  pattern = "dot",
  pch = 4,
  cex = 0.5,
  density = 3.5)

#vertical
hatchedLayer(EUR[EUR$SUBREGION.EN == levels[3], ],
             add = TRUE,
             pattern = "vertical",
             density = 3)

#another dot
hatchedLayer(EUR[EUR$SUBREGION.EN == levels[4], ],
             add = TRUE,
             pattern = "dot",
             pch= 15,
             density = 3.5)

#create legend
legendHatched(
  pos = "topright",
  title.txt = "",
  title.cex = 0.1,
  categ = c(levels, "Others"),
  patterns = c("zigzag",
               "dot",
               "vertical",
               "dot",
               "left2right"),
  pch = c(4,15),
  lty = c(1, 1, 3),
  col = c(rep("black", length(levels)), "grey50"),
  frame = TRUE
)

layoutLayer(
  title = "Regions of Europe (United Nations)",
  theme = "grey.pal",
  sources = "© EuroGeographics for the administrative boundaries.",
  author =  paste0("cartography ", packageVersion("cartography")),
  scale = 500,
  frame = TRUE
) 

plot of chunk 20200217_hatched

legendHatched honors the order of the parameters. In this case, two dot patterns are present, so pch = c(4,15) takes care of that. Note that three line-type patterns are also plotted and, as in the previous case, lty = c(1, 1, 3) respects that order.

Example 3

hatchedLayer can also be useful for plotting several dimensions on the same map, in combination with other functions of the package.


library(sf)
library(cartography)



#Warsaw Pact - roughly mapped via UN Region
wp <- subset(cntries, SUBREGION.EN == "Eastern Europe")

#European Union - after Brexit
eu <- subset(cntries, !is.na(EU))

#Euro Area
ea <- subset(cntries, !is.na(EuroArea))

#Flags for plotting
flag = ifelse(cntries$ISO3_CODE %in% eu$ISO3_CODE,
              "European Union",
              cntries$CONTINENT.EN)


flag = ifelse(!flag %in% c("Europe", "European Union"), "Other",
              flag)
cntries$flag <- flag



#Plot lims
plot(
  st_geometry(cntries),
  xlim = c(2200000, 7150000),
  ylim = c(1380000, 5500000)
)


#Combine with typoLayer
typoLayer(
  x = cntries,
  var = "flag",
  legend.pos = "n",
  legend.values.order = c("European Union",
                          "Europe",
                          "Other"),
  col = c("#A6BDDB",
          "#FEFEE9",
          "#E0E0E0"),
  add = TRUE
)

#hatching
hatchedLayer(
  x = ea,
  pattern = "left2right",
  cellsize = 100000,
  add = TRUE
)

hatchedLayer(
  x = wp,
  pattern = "vertical",
  col="red",
  cellsize = 75000,
  add = TRUE
)

#Legend
legendHatched(
  pos = "right",
  title.txt = "Economic Region",
  categ = c(
    "Euro Area",
    "Former \nWarsaw Pact",
    "European Union",
    "Europe",
    "Other"
  ),
  patterns = c("left2right", "vertical"),
  col = c("black", "red", "#A6BDDB",
          "#FEFEE9",
          "#E0E0E0"),
  ptrn.bg = c("white", "white", "#A6BDDB",
              "#FEFEE9",
              "#E0E0E0"),
  frame = TRUE
)

layoutLayer(
  title = "European Countries - Blocks",
  theme = "blue.pal",
  sources = "© EuroGeographics for the administrative boundaries.",
  author =  paste0("cartography ", packageVersion("cartography")),
  scale = 500,
  frame = TRUE
)

plot of chunk 20200217_hatched-adv

png Layer

This new capability geotags a .png file, effectively converting the image into a tile. This allows the user to create visual maps by masking an image to the shape of a POLYGON/MULTIPOLYGON.

For high-quality png maps, it is recommended to plot your map on a .svg device.
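For instance, the plotting calls can be wrapped in base R's svg() device (the output filename here is hypothetical):

```r
# Open a vector (SVG) device so the masked png tiles scale cleanly
svg("flags_africa.svg", width = 7, height = 7)  # hypothetical output file

# ... pngLayer()/plot() calls from the examples below go here ...
plot(1:3)  # placeholder so the device has content

dev.off()  # close the device and write the file
```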

Example 1

library(sf)
library(cartography)

cntries2 = st_read("https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/countries/geojson/CNTR_RG_20M_2016_3857.geojson",
                  stringsAsFactors = FALSE)
## Reading layer `CNTR_RG_20M_2016_3857' from data source `https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/countries/geojson/CNTR_RG_20M_2016_3857.geojson' using driver `GeoJSON'
## Simple feature collection with 257 features and 6 fields
## Geometry type: MULTIPOLYGON
## Dimension:     XY
## Bounding box:  xmin: -20037510 ymin: -30240970 xmax: 20037510 ymax: 18446790
## Projected CRS: WGS 84 / Pseudo-Mercator

cntries2 = merge(cntries2,
                df,
                by.x = "ISO3_CODE",
                by.y = "ISO_3166_3",
                all.x = TRUE)

africa=subset(cntries2, CONTINENT.EN =="Africa") %>%
  arrange(desc(area_km2))


#Get bigger countries

plot(st_geometry(africa[1:20,]), bg="lightblue")


plot(st_geometry(cntries2), col="grey90", add=TRUE)

#Iterate for Africa
#Get flags from repo - low quality to speed up the vignette
flagrepo = "https://raw.githubusercontent.com/hjnilsson/country-flags/master/png250px/"


for (i in 1:nrow(africa)) {
  cntry <- as.character(st_drop_geometry(africa[i, "ISO_3166_2"]))
  a = getPngLayer(africa[i,],
                  paste(flagrepo, tolower(cntry), ".png", sep = ""))
  pngLayer(a, add = TRUE)
}
#Add borders
plot(st_geometry(cntries2), add = TRUE, col = NA, lwd=0.4)

layoutLayer(
  title = "Flags of Africa",
  theme = "green.pal",
  sources = "© EuroGeographics for the administrative boundaries.",
  author =  paste0("cartography ", packageVersion("cartography")),
  scale = 500,
  frame = TRUE
) 

plot of chunk 20200217_pnglayer

Example 2


library(sf)
library(cartography)

box <- c(xmin=2200000, xmax=7150000,ymin=1380000, ymax=5500000)
nuts0 <-  st_crop(cntries, box)

UK = nuts0 %>% filter(id == "UK")
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Flag_of_Europe.svg/800px-Flag_of_Europe.svg.png"


EU=getPngLayer(nuts0,url, mask=FALSE)
pngLayer(EU, alpha=100)
urluk = "https://raw.githubusercontent.com/hjnilsson/country-flags/master/png250px/gb.png"

UKpng=getPngLayer(UK,urluk)

pngLayer(UKpng, add=TRUE)

plot of chunk 20200217_png-adv

wordcloudLayer

A word cloud (or tag cloud) is a visual representation of text data. In a mapping context, this representation is useful for conveying several pieces of information at a glance.

Wordcloud layers fitted into a map shape provide a good trade-off between physical location, scale and labels. Size and colors of the words are also based on the frequency of the factor to be plotted, highlighting the most frequent terms over the rest.

Example 1


eu$dens=eu$pop/eu$area_km2

wordcloudLayer(eu,txt="NAME.EN",freq="dens",
               fittopol = TRUE,
               nclass=7,
               cex.maxmin = c(3,1) )
br=getBreaks(eu$dens, nclass=7)

legendChoro(breaks=br, title.txt="Pop Density",col=carto.pal("blue.pal",7))
layoutLayer(
  title = "Population Density in Europe",
  theme = "blue.pal",
  sources = "© EuroGeographics for the administrative boundaries.",
  author =  paste0("cartography ", packageVersion("cartography")),
  scale = 500,
  frame = TRUE
) 

plot of chunk 20200217_wordcloud1

Example 2


# Genres from MB--
#Import genres
collected = read.csv("./assets/data/US_MB.csv", stringsAsFactors = F)
collected=collected[-1,] %>% arrange(desc(n))

set.seed(1234)

library(rnaturalearth)
shape = ne_countries(country="united states of america", returnclass = "sf") %>% st_transform(3857)
#Get main polygon
shape = shape %>% st_union %>% st_cast("POLYGON")
areas = st_area(shape)
shape = shape[areas == max(areas)]

set.seed(1234)
points = 100
#Sample regular
points = st_sample(shape, points + 5, type = "regular")

#Center points

centr=st_centroid(shape, of_largest_polygon = TRUE)
f=st_distance(points,centr)
dfpoints=st_sf(dist=f,
               geometry=points) %>%
  arrange((dist))


#Create df
points=st_sf(collected[1:100,],
             geometry=st_geometry(dfpoints)[1:100])

plot(shape, col="grey95", border = NA)
wordcloudLayer(points,
               txt="genre",
               freq="n",
               cex.maxmin =  c(3, 0.6),
               col=c("#ba478f", "#eb743b"),
               nclass=7,
               add=TRUE)
layoutLayer(title="Most frequent genres on US",
            sources="Musicbrainz",
              author =  paste0("cartography ", packageVersion("cartography")),
            theme="orange.pal")

plot of chunk 20200217_wordcloud2

]]>
https://dieghernan.github.io/202002_cartography1/ posts beautiful_mapscartographymapsr_bloggersr_packagerspatialrstatssf https://dieghernan.github.io/202002_cartography1/ Mon, 17 Feb 2020 00:00:00 +0100
We’ll miss you, UK Brexit and the consequences

1 min.

This is just a super-quick post regarding Brexit. Leaving aside economic, political and social considerations, there is another consequence: now we are one fewer in the EU.

We will miss you, UK, we wish you the best.

library(cartography)
library(sf)
library(giscoR)
library(dplyr)


#EU
eu <- giscoR::gisco_countrycode %>% filter(eu)

# Download maps
NUTS1 <- gisco_get_nuts(epsg = 3035, country = eu$ISO3_CODE, nuts_level = 1)


UK <- gisco_get_countries(country =  "GBR", epsg = 3035)

noplot = c("FRY", "ES7", "PT2", "PT3")
NUTS1_Clean = NUTS1 %>% subset(!NUTS_ID %in% noplot) %>%
  group_by(CNTR_CODE) %>% summarise(a=dplyr::n())

# Flag image
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Flag_of_Europe.svg/800px-Flag_of_Europe.svg.png"

#Mask UK
flagcut = getPngLayer(NUTS1_Clean,
                       url)
# Full extent
flag = getPngLayer(NUTS1_Clean,
                       url, mask = FALSE)


par(mar = c(0, 0, 0, 0))
tilesLayer(flag, alpha = 150)
tilesLayer(flagcut, bgalpha = 0, add = T)
plot(st_geometry(UK),
     col = "white",
     border = NA,
     add = TRUE)

plot of chunk 20200602_imgpost

]]>
https://dieghernan.github.io/202002_Brexit/ posts flagsgiscoRmapsrspatialrstats https://dieghernan.github.io/202002_Brexit/ Thu, 06 Feb 2020 00:00:00 +0100
Beautiful Maps with R (III): Patterns and hatched maps A solution for b/w and academic maps.

2 min.

Updated 17 February 2020: All these pieces of work are already available in cartography >= v2.4.0 as the hatchedLayer and legendHatched functions. Just install it via install.packages("cartography"). A dedicated blog post with examples is available on this link.

In this post I show how to produce different fill patterns that can be added over your shapefiles with the cartography package.

Required R packages

library(sf)
library(dplyr)
library(giscoR)
library(cartography)
DE <- gisco_get_countries(country = "Germany", epsg = 3857)

Let’s see how it works.

par(
  mfrow = c(3, 4),
  mar = c(1, 1, 1, 1),
  cex = 0.5
)
hatchedLayer(DE, "dot")
title("dot")
hatchedLayer(DE, "text", txt = "Y")
title("text")
hatchedLayer(DE, "diamond", density = 0.5)
title("diamond")
hatchedLayer(DE, "grid", lwd = 1.5)
title("grid")
hatchedLayer(DE, "hexagon", col = "blue")
title("hexagon")
hatchedLayer(DE, "horizontal", lty = 5)
title("horizontal")
hatchedLayer(DE, "vertical")
title("vertical")
hatchedLayer(DE, "left2right")
title("left2right")
hatchedLayer(DE, "right2left")
title("right2left")
hatchedLayer(DE, "zigzag")
title("zigzag")
hatchedLayer(DE, "circle")
title("circle")

plot of chunk 20191212_showfun

Let’s play a little bit more with some of the additional features of the function:

par(mar = c(1, 1, 1, 1), mfrow = c(2, 3))
plot(st_geometry(DE))
hatchedLayer(
  DE,
  "dot",
  pch = 10,
  density = 0.5,
  cex = 2,
  col = "darkblue",
  add = T
)
plot(st_geometry(DE))
hatchedLayer(
  DE,
  "dot",
  pch = 21,
  col = "red",
  bg = "green",
  cex = 1.25,
  add = T
)
plot(st_geometry(DE), col = "grey")
hatchedLayer(
  DE,
  "text",
  txt = "DE",
  density = 1.1,
  col = "white",
  add = T
)
plot(st_geometry(DE), col = "blue")
hatchedLayer(
  DE,
  "horizontal",
  lty = 3,
  cellsize = 150 * 1000,
  add = T
)
hatchedLayer(DE, "zigzag", lwd = 2, col = "red")
plot(st_geometry(DE), border = "orange", lwd = 2)
hatchedLayer(DE,
  "left2right",
  density = 2,
  col = "orange",
  add = T
)

plot of chunk 20191212_playing

Adding legends: the legendHatched function

As a complementary function, there is also legendHatched. Its main parameters are:

  • pos, title.txt, title.cex, values.cex, categ, cex and frame: see ?cartography::legendTypo.
  • patterns: vector of patterns to be created for each element of categ.
  • ptrn.bg: background of the legend box for each categ.
  • ptrn.text: text to be used for each "text" pattern, as a single value or a vector.
  • dot.cex: cex of each "dot" pattern, as a single value or a vector.
  • text.cex: text size of each "text" pattern, as a single value or a vector.
  • As with the hatchedLayer function, additional graphical parameters can be passed (lty, lwd, pch, bg on points).

Note that it is also possible to create solid legends by setting col and ptrn.bg to the same color. Parameters follow the order of the categ variable.

par(mar = c(0, 0, 0, 0), mfrow = c(1, 1))
plot(st_geometry(DE)) # Null geometry
legendHatched(
  title.txt = "Example 1",
  categ = c("a", "b"),
  patterns = "dot",
  pch = c(16, 23),
  frame = T
)
legendHatched(
  pos = "left",
  title.txt = "Example 2",
  categ = c("c", "d", "other text"),
  patterns = c("text", "zigzag"),
  ptrn.text = c("s", "pp"),
  ptrn.bg = "grey80",
  col = c("red", "blue")
)

legendHatched(
  pos = "topright",
  title.txt = "Example 3",
  categ = c("e", "f", "solid"),
  patterns = c("circle", "left2right"),
  ptrn.bg = c("orange", "yellow", "green"),
  col = c("white", "white", "green"),
  lty = c(2, 4),
  lwd = c(1, 3)
)


legendHatched(
  pos = "bottomright",
  title.txt = "Example 4",
  values.cex = 1.2,
  categ = c("h", "i", "j", "k"),
  patterns = c("grid", "diamond", "horizontal", "dot"),
  cex = 2,
  pch = 22,
  col = "white",
  ptrn.bg = "black",
  bg = "pink"
)

plot of chunk 20191212_playinglegend

I hope you find these functions useful. Enjoy and happy mapping!

plot of chunk 20191212_imgpost

]]>
https://dieghernan.github.io/201912_Beautiful3/ posts beautiful_mapscartographyfunctiongiscoRmapsr_bloggersr_packagerspatialrstatssf https://dieghernan.github.io/201912_Beautiful3/ Thu, 12 Dec 2019 00:00:00 +0100
Quick R: Inset maps An alternative using plot()

3 min.

How to place an inset map in R? There are many solutions out there using the ggplot2 package (see Drawing beautiful maps programmatically with R, sf and ggplot2 by Mel Moreno and Mathieu Basille). However, I like the old reliable plot function, so the question is: is there another way?

There is. I found inspiration here and I just applied it to a map.

Required R packages

library(sf)
library(dplyr)
library(rnaturalearth)

Mimicking Moreno & Basille

Here I present an alternative version of Drawing beautiful maps programmatically with R, sf and ggplot2, so the bulk of the detail can be found there. I focus only on the plot() side.

world <- ne_countries(scale = "medium", returnclass = "sf")
USA <- subset(world, admin == "United States of America")

# Plot mainland USA----
par(mar = c(0, 0, 0, 0))
plot(
  st_geometry(world %>%
    st_transform(2163)),
  xlim = c(-2500000, 2500000),
  ylim = c(-2300000, 730000),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(2163),
  col = "#FEFEE9",
  border = "black",
  add = T
)

plot of chunk 20191108_main

# Plot Alaska----
plot(
  st_geometry(world %>%
    st_transform(3467)),
  xlim = c(-2400000, 1600000),
  ylim = c(200000, 2500000),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(3467),
  col = "#FEFEE9",
  border = "black",
  add = T
)

plot of chunk 20191108_Alaska

# Plot Hawaii----
plot(
  st_geometry(world %>%
    st_transform(4135)),
  xlim = c(-161, -154),
  ylim = c(18, 23),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(4135),
  col = "#FEFEE9",
  border = "black",
  add = T
)

plot of chunk 20191108_Hawaii

Insetting

From now on, I focus on the inset part, using the fig argument of par(). Quoting statmethods:

(…) think of the full graph area as going from (0,0) in the lower left corner to (1,1) in the upper right corner. The format of the fig= parameter is a numerical vector of the form c(x1, x2, y1, y2) (…) fig= starts a new plot, so to add to an existing plot use new=TRUE.

With x1, y1 as the starting corner and x2, y2 as the final corner, we can tune those parameters to adjust the final placement of the insets. Additionally, I drew a frame around each inset using box(). I didn't mimic Moreno & Basille here; I worked this part out by myself.
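As a minimal, non-spatial sketch of the mechanics (the datasets and coordinates below are arbitrary, chosen just for illustration):

```r
# Main plot uses the full device
par(mar = c(2, 2, 1, 1))
plot(cars, main = "Main plot")

# Inset: occupy roughly the lower-left 30% x 35% of the device
fig_inset <- c(0.05, 0.35, 0.05, 0.40) # c(x1, x2, y1, y2), all in [0, 1]
par(fig = fig_inset, new = TRUE, mar = c(1, 1, 1, 1))
plot(pressure, axes = FALSE, frame.plot = TRUE, main = "Inset")
box(which = "figure") # Frame around the inset
```

The same trick scales to any number of insets: call par(fig = ..., new = TRUE) once per inset before plotting it.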

par(mar = c(0, 0, 0, 0))
plot(
  st_geometry(world %>%
    st_transform(2163)),
  xlim = c(-2500000, 2500000),
  ylim = c(-2300000, 730000),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(2163),
  col = "#FEFEE9",
  border = "black",
  add = T
)
# Alaska
par(
  fig = c(0.01, 0.28, 0.01, 0.33),
  new = TRUE
)
plot(
  st_geometry(world %>%
    st_transform(3467)),
  xlim = c(-2400000, 1600000),
  ylim = c(200000, 2500000),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(3467),
  col = "#FEFEE9",
  border = "black",
  add = T
)
box(which = "figure", lwd = 1)

# Hawaii
par(
  fig = c(0.29, 0.45, 0.01, 0.15),
  new = TRUE
)
plot(
  st_geometry(world %>%
    st_transform(4135)),
  xlim = c(-161, -154),
  ylim = c(18, 23),
  col = "#F6E1B9",
  border = "#646464",
  bg = "#C6ECFF"
)
plot(
  st_geometry(USA) %>% st_transform(4135),
  col = "#FEFEE9",
  border = "black",
  add = T
)

box(which = "figure", lwd = 1)

plot of chunk 20191108_inset

Results may vary depending on the size of the original plot (mainland USA), your plotting device and the output format. However, with a bit of trial and error it is quite easy to adjust the final result.

As an example, see one of my contributions to Wikimedia Commons that represents a map of the NUTS3 regions of the European Union. Several countries (France, Portugal, Spain) have overseas territories so I made a few insets on the right side.

]]>
https://dieghernan.github.io/201911_QuickR/ posts flagsinsetmapsr_bloggersrnaturalearthrspatialrstatssf https://dieghernan.github.io/201911_QuickR/ Thu, 07 Nov 2019 00:00:00 +0100
Wikipedia Maps (I): Organ donor rates A choropleth map with R

2 min.

This is a quick post on how to create a map as per the Wikipedia conventions. In this case I have chosen to plot the international organ donor rates, retrieved on 2019-02-10 although the data refers to 2017 (Source: IRODaT).

Web scraping

The first step is to scrape Wikipedia in order to get the final table. To do so, I use the rvest library. You can get the xpath you want to scrape as explained here.

library(rvest)
library(dplyr)

Base <-
  read_html("https://en.wikipedia.org/wiki/International_organ_donor_rates") %>%
  html_nodes(xpath = '//*[@id="mw-content-text"]/div/table[3]') %>%
  html_table() %>%
  as.data.frame(
    stringsAsFactors = F,
    fix.empty.names = F
  ) %>%
  select(Country,
    RateDonperMill = Number.of.deceased.donors..per.million.of.population
  )

knitr::kable(head(Base, 10), format = "markdown")
Country RateDonperMill
Argentina 19.60
Armenia 0.00
Australia 21.60
Austria 23.88
Azerbaijan 0.00
Bahrain 4.00
Bangladesh 0.00
Belarus 26.20
Belgium 30.30
Bolivia 0.36

Now we need to choose a good source of maps:

library(sf)
library(giscoR)

# Map import from Eurostat
WorldMap <- gisco_get_countries(resolution = 3, epsg = 3857) %>%
  select(ISO_3166_3 = ISO3_CODE)

Merging all together

Now let's join the tables and have a look at what is going on. We use the countrycode package to retrieve the ISO3 codes for our scraped dataset:

library(countrycode)

Base$ISO_3166_3 <- countrycode(Base$Country, origin = "country.name", destination = "iso3c")


DonorRate <- left_join(
  WorldMap,
  Base
) %>%
  select(
    Country,
    ISO_3166_3,
    RateDonperMill
  )

Make the .svg file

As already explained, I would like to follow the Wikipedia conventions, so some things to bear in mind:

  • Obviously, the colors. Wikipedia already provides good guidance for this. I make use of the RColorBrewer library, which implements ColorBrewer in R.
  • In terms of projection, Wikipedia recommends the Equirectangular projection but, as in their own sample of a gradient map, I chose the Robinson projection.
  • I should produce an .svg file, also following the naming convention.

Some libraries to use then: RColorBrewer, rsvg and, especially, one of my favourites, cartography:

library(RColorBrewer)
library(cartography)
library(rsvg)

# Create bbox of the world
bbox <- st_linestring(rbind(
  c(-180, 90),
  c(180, 90),
  c(180, -90),
  c(-180, -90),
  c(-180, 90)
)) %>%
  st_segmentize(5) %>%
  st_cast("POLYGON") %>%
  st_sfc(crs = 4326) %>%
  st_transform(crs = "+proj=robin")

# Create SVG
svg(
  "Organ donor rate per million by country gradient map (2017).svg",
  pointsize = 90,
  width =  1600 / 90,
  height = 728 / 90
)

par(mar = c(0.5, 0, 0, 0))
choroLayer(
  DonorRate %>% st_transform("+proj=robin"),
  var = "RateDonperMill",
  breaks = c(0, 5, 10, 20, 30, 40, 50),
  col = brewer.pal(6, "PuBu"),
  border = "#646464",
  lwd = 0.1,
  colNA = "#E0E0E0",
  legend.pos = "left",
  legend.title.txt = "",
  legend.values.cex = 0.25
)

# Bounding box
plot(bbox,
  add = T,
  border = "#646464",
  lwd = 0.2
)

dev.off()

And that’s all. Our .svg file is ready to be included in Wikipedia.

img

Update: The map is already part of the Wikipedia article.

]]>
https://dieghernan.github.io/201910_WikiMap1/ posts WikipediacartographygiscoRmapsr_bloggersrspatialrstatssfsvg https://dieghernan.github.io/201910_WikiMap1/ Wed, 16 Oct 2019 00:00:00 +0200
Beautiful Maps with R (II): Fun with flags Put a picture on your map

3 min.

Updated 29 December 2020: All these pieces of work are already available in cartography (>= v2.4.0) through the function getPngLayer. Just install it via install.packages("cartography"). A dedicated blog post with examples is available on this link.

Updated 25 January 2023: cartography is in maintenance mode. You can use rasterpic + tidyterra to achieve the same result; see link1 and link2.

Want to use a flag (or any *.png file) as the background of your map? You are in the right post. I am aware that there are some R packages out there for this, but here we focus on the option provided by cartography::getPngLayer(), which basically converts your image into a raster (see also Paul Murrell's article "Raster Images in R Graphics", The R Journal, Volume 3/1, June 2011).

Required R packages

library(dplyr)
library(sf)
library(cartography)
library(mapSpain)
library(giscoR)

Choosing a good source for our shape

In this post I am going to plot a map of Spain with its autonomous communities (plus two autonomous cities), the first-level administrative division of the country. Wikipedia shows an initial map that also identifies the flag of each territory.

For that I use mapSpain, which relies on giscoR, whose source is the geodata available from Eurostat. I also use giscoR to get the world around Spain.

Spain <- esp_get_ccaa(epsg = 3857, res = 3)

World <- gisco_get_countries(epsg = 3857, res = 3)

bboxcan <- esp_get_can_box(epsg = 3857)

# Plot
par(mar = c(0, 0, 0, 0))
plot(st_geometry(Spain),
  col = NA,
  border = NA,
  bg = "#C6ECFF"
)
plot(st_geometry(World),
  col = "#E0E0E0",
  bg = "#C6ECFF",
  add = T
)
plot(st_geometry(Spain), col = "#FEFEE9", add = T)
layoutLayer(
  title = "",
  frame = FALSE,
  scale = 500,
  sources = gisco_attributions(),
  author = "dieghernan, 2019",
)
plot(bboxcan, add = TRUE)

plot of chunk 20190618_preparing

Now we have it! A nice map of Spain with a layout based on the Wikipedia convention for location maps.

Loading the flag

As a first example, I chose Asturias to build my code. So the goal here is to create a RasterBrick from the desired *.png file, add the necessary geographical information and use the shape of Asturias to crop the flag.

# 1.Shape---
shp <- Spain %>% filter(ccaa.shortname.es == "Asturias")

# 2.Get flag---

# Masked
url <- "https://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Flag_of_Asturias.svg/800px-Flag_of_Asturias.svg.png"

flagnomask <- getPngLayer(shp, url, mask = FALSE)

flagmask <- getPngLayer(shp, url, mask = TRUE)

opar <- par(no.readonly = TRUE)
par(mar = c(1, 1, 1, 1), mfrow = c(1, 2))
pngLayer(flagnomask)
plot(st_geometry(Spain), add = T)

# 4.Mask---
pngLayer(flagmask)
plot(st_geometry(Spain), add = T)

plot of chunk 20190618_Asturias

par(opar)

Pro tip: Use a high-quality *.png file, otherwise the plot will look quite poor. Here I show an extreme example.

MURshp <- Spain %>% filter(ccaa.shortname.es == "Murcia")
MURLow <- getPngLayer(
  MURshp,
  "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Flag_of_the_Region_of_Murcia.svg/100px-Flag_of_the_Region_of_Murcia.svg.png"
)
MURHigh <- getPngLayer(
  MURshp,
  "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Flag_of_the_Region_of_Murcia.svg/1200px-Flag_of_the_Region_of_Murcia.svg.png"
)


# Plot and compare
opar <- par(no.readonly = TRUE)
par(mfrow = c(1, 2), mar = c(1, 1, 1, 1))
plot_sf(MURshp, main = "Low")
pngLayer(MURLow, add = TRUE)

plot_sf(MURshp, main = "High")
pngLayer(MURHigh, add = TRUE)

plot of chunk 20190618_svg

par(opar)

Now, we are ready to have fun with flags. It’s time to make the flag map of the autonomous communities of Spain.

par(mar = c(0, 0, 0, 0), mfrow = c(1, 1))
plot(Spain %>%
  st_geometry(),
col = NA,
border = NA,
bg = "#C6ECFF"
)
plot(st_geometry(World),
  col = "#E0E0E0",
  add = T
)
plot(st_geometry(bboxcan),
  add = T
)
layoutLayer(
  title = "",
  frame = FALSE,
  sources = "© EuroGeographics for the administrative boundaries",
  author = "dieghernan, 2019",
)
# Andalucia
flag <-
  "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Bandera_de_Andalucia.svg/1000px-Bandera_de_Andalucia.svg.png"
shp <- Spain %>% filter(ccaa.shortname.es == "Andalucía")
pngLayer(getPngLayer(shp, flag), add = TRUE)

# ...more flags
# Go to the source code of this post on GitHub for the full code

plot(st_geometry(Spain),
  col = NA,
  lwd = 2,
  add = T
)

plot of chunk 20190618_allCCAA

We are done now. If you have suggestions, you can leave a comment. As always, if you enjoyed the post you can share it on your preferred social network.

]]>
https://dieghernan.github.io/201906_Beautiful2/ posts beautiful_mapsflagsfunctiongiscoRmapSpainmapsr_bloggersrasterrspatialrstats https://dieghernan.github.io/201906_Beautiful2/ Tue, 18 Jun 2019 00:00:00 +0200
Beautiful Maps with R (I): Fishnets, Honeycombs and Pixels Awesome simplified maps with R

5 min.

Sometimes you want to produce maps with special layouts. I especially like maps with modified geometries, e.g. simplifying the original shapes to squares or dots. While doing a little search over the web I found the fantastic post Fishnets and Honeycomb: Square vs. Hexagonal Spatial Grids by Matt Strimas-Mackey, which was a huge inspiration (by the way, don't miss his blog, full of very interesting pieces of work).

The only thing I was not completely comfortable with was that the post used the old-fashioned sp package instead of my personal favourite, the sf package (the post was published in 2016; note that the author has since moved to sf as well). So I decided to explore further options with sf.

The approach is similar (laying grids over a map and working with them) but uses st_make_grid. I also expanded it by grouping the grid cells and by producing dots over the geometries. So basically I produced five variations of the map:

Type Replacement Grouped
Fishnet Square No
Puzzle Square Yes
Honeycomb Hexagon No
Hexbin Hexagon Yes
Pixel Dot No

Required R packages

library(sf)
library(giscoR)
library(dplyr)
library(RColorBrewer)
library(rnaturalearth) # Needed for ne_download() below

Working with square grids

Let’s use the square option of st_make_grid and play a bit with it.

GB <- ne_download(50,
  type = "map_subunits",
  returnclass = "sf",
  destdir = tempdir()
) %>%
  subset(CONTINENT == "Europe") %>%
  subset(ADM0_A3 == "GBR")

# Projecting and cleaning
GB <- st_transform(GB, 3857) %>% select(NAME_EN, ADM0_A3)
initial <- GB
initial$index_target <- 1:nrow(initial)
target <- st_geometry(initial)

# Create my own color palette
mypal <- colorRampPalette(c("#F3F8F8", "#008080"))

Warning: The cellsize should be given in the same units as the projection (meters in this case). Pay special attention to this: if the parameter is too small (meaning the grid is too dense), R can easily crash.

grid <- st_make_grid(target,
  50 * 1000, # 50 km, expressed in meters
  crs = st_crs(initial),
  what = "polygons",
  square = TRUE
)

# To sf
grid <- st_sf(index = 1:length(lengths(grid)), grid) # Add index

# We identify the grid cells that belong to an entity by assessing the centroid
cent_grid <- st_centroid(grid)
cent_merge <- st_join(cent_grid, initial["index_target"], left = F)
grid_new <- inner_join(grid, st_drop_geometry(cent_merge))

# Fishnet
Fishgeom <- aggregate(grid_new,
  by = list(grid_new$index_target),
  FUN = min,
  do_union = FALSE
)

# Lets add the df
Fishnet <- left_join(
  Fishgeom %>% select(index_target),
  st_drop_geometry(initial)
) %>%
  select(-index_target)

# Now lets create the Puzzle
Puzzlegeom <- aggregate(st_buffer(grid_new, 0.5), # Avoid slivers
  by = list(grid_new$index_target),
  FUN = min,
  do_union = TRUE
) # This changes!!!

Puzzle <- left_join(
  Puzzlegeom %>% select(index_target),
  st_drop_geometry(initial)
) %>%
  select(-index_target)

# Plot
par(mfrow = c(1, 2), mar = c(1, 1, 1, 1), bg = NA)
plot(st_geometry(Fishnet), col = mypal(4), main = "Fishnet")
plot(st_geometry(Puzzle), col = mypal(4), main = "Puzzle")

plot of chunk 20190602_squares

Going hex

Extremely easy: we just need to change the square parameter of st_make_grid from TRUE to FALSE.

grid <- st_make_grid(target,
  50 * 1000, # 50 km, expressed in meters
  crs = st_crs(initial),
  what = "polygons",
  square = FALSE # This is the only piece that changes!!!
)
# Make sf
grid <- st_sf(index = 1:length(lengths(grid)), grid) # Add index

# We identify the grid cells that belong to an entity by assessing the centroid
cent_grid <- st_centroid(grid)
cent_merge <- st_join(cent_grid, initial["index_target"], left = F)
grid_new <- inner_join(grid, st_drop_geometry(cent_merge))

# Honeycomb
Honeygeom <- aggregate(
  grid_new,
  by = list(grid_new$index_target),
  FUN = min,
  do_union = FALSE
)

# Lets add the df
Honeycomb <- left_join(
  Honeygeom %>%
    select(index_target),
  st_drop_geometry(initial)
) %>%
  select(-index_target)

# Now lets create the Hexbin

Hexbingeom <- aggregate(
  st_buffer(grid_new, 0.5), # Avoid slivers
  by = list(grid_new$index_target),
  FUN = min,
  do_union = TRUE
)

Hexbin <- left_join(
  Hexbingeom %>%
    select(index_target),
  st_drop_geometry(initial)
) %>%
  select(-index_target)

# Plot
par(mfrow = c(1, 2), mar = c(1, 1, 1, 1), bg = NA)
plot(st_geometry(Honeycomb), col = mypal(4), main = "Honeycomb")
plot(st_geometry(Hexbin), col = mypal(4), main = "Hexbin")

plot of chunk 20190602_hex

Pixel it!

Also quite easy, just a couple more tweaks, always using st_make_grid.

grid <- st_make_grid(target,
  50 * 1000, # 50 km, expressed in meters
  crs = st_crs(initial),
  what = "centers"
)

# Make sf
grid <- st_sf(index = 1:length(lengths(grid)), grid) # Add index

# We identify the grid cells that belong to an entity by assessing the centroid
cent_grid <- st_centroid(grid)
cent_merge <- st_join(cent_grid, initial["index_target"], left = F)
grid_new <- st_buffer(cent_merge, 50 * 1000 / 2)
Pixelgeom <- aggregate(
  grid_new,
  by = list(grid_new$index_target),
  FUN = min,
  do_union = FALSE
)
# Lets add the df
Pixel <- left_join(
  Pixelgeom %>%
    select(index_target),
  st_drop_geometry(initial)
) %>%
  select(-index_target)

# Plot
par(mfrow = c(1, 1), mar = c(1, 1, 1, 1), bg = NA)
plot(st_geometry(Pixel), col = mypal(4), main = "Pixel")

plot of chunk 20190602_pix

Wrap up

So finally I wrapped all that up in a function (see the code in my repo), which I named stdh_gridpol:

stdh_gridpol <- function(sf,
                         to = "fishnet",
                         gridsize = as.integer(
                           min(
                             diff(st_bbox(sf)[c(1, 3)]),
                             diff(st_bbox(sf)[c(2, 4)])
                           ) / 40
                         ),
                         sliver = 0.5) {
  if (!unique(st_geometry_type(sf)) %in% c("POLYGON", "MULTIPOLYGON")) {
    stop("Input should be  MULTIPOLYGON or POLYGON")
  }
  if (!to %in% c("fishnet", "puzzle", "honeycomb", "hexbin", "pixel")) {
    stop("'to' should be 'fishnet','puzzle','honeycomb','hexbin' or 'pixel'")
  }

  if (class(sf)[1] == "sf") {
    initial <- sf
    initial$index_target <- 1:nrow(initial)
  } else {
    initial <- st_sf(index_target = 1:length(sf), geom = sf)
  }

  target <- st_geometry(initial)

  if (to %in% c("fishnet", "puzzle")) {
    sq <- T
  } else {
    sq <- F
  }
  if (to == "pixel") {
    grid <- st_make_grid(target,
      gridsize,
      crs = st_crs(initial),
      what = "centers"
    )
  } else {
    grid <- st_make_grid(
      target,
      gridsize,
      crs = st_crs(initial),
      what = "polygons",
      square = sq
    )
  }
  grid <- st_sf(index = 1:length(lengths(grid)), grid) # Add index
  if (to == "pixel") {
    cent_merge <- st_join(grid, initial["index_target"], left = F)
    grid_new <- st_buffer(cent_merge, gridsize / 2)
  } else {
    cent_grid <- st_centroid(grid)
    cent_merge <- st_join(cent_grid, initial["index_target"], left = F)
    grid_new <- inner_join(grid, st_drop_geometry(cent_merge))
  }
  if (to %in% c("fishnet", "honeycomb", "pixel")) {
    geom <- aggregate(
      grid_new,
      by = list(grid_new$index_target),
      FUN = min,
      do_union = FALSE
    )
  } else {
    geom <- aggregate(
      st_buffer(grid_new, sliver),
      by = list(grid_new$index_target),
      FUN = min,
      do_union = TRUE
    )
  }
  if (class(initial)[1] == "sf") {
    fin <- left_join(
      geom %>% select(index_target),
      st_drop_geometry(initial)
    ) %>%
      select(-index_target)
    fin <- st_cast(fin, "MULTIPOLYGON")
    return(fin)
  } else {
    fin <- st_cast(geom, "MULTIPOLYGON")
    return(st_geometry(fin))
  }
}
# End of the function-----

fish <- stdh_gridpol(GB, to = "fishnet", gridsize = 50 * 1000)
puzz <- stdh_gridpol(GB, to = "puzzle", gridsize = 50 * 1000)
hon <- stdh_gridpol(GB, to = "honeycomb", gridsize = 50 * 1000)
hex <- stdh_gridpol(GB, to = "hexbin", gridsize = 50 * 1000)
pix <- stdh_gridpol(GB, to = "pixel", gridsize = 50 * 1000)

plot of chunk 20190602_functionex

And that's it! The function stdh_gridpol has some safeguards, such as accepting only POLYGON or MULTIPOLYGON inputs, and the default value of gridsize is computed so that the shortest dimension of the bounding box spans about 40 cells.
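The default gridsize can be sketched with plain numbers (the bounding-box values below are illustrative, mirroring the st_bbox() expression in the function above):

```r
# A bounding box as c(xmin, ymin, xmax, ymax), in projected units (meters)
bb <- c(xmin = 0, ymin = 0, xmax = 400000, ymax = 1000000)

# Same logic as the function default: shortest side divided into ~40 cells
gridsize <- as.integer(min(bb["xmax"] - bb["xmin"], bb["ymax"] - bb["ymin"]) / 40)
gridsize
# 10000, i.e. 10 km cells for a 400 km x 1000 km extent
```

This keeps the grid density reasonable for any input extent, which is exactly why the function rarely needs gridsize to be set by hand.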

]]>
https://dieghernan.github.io/201906_Beautiful1/ posts beautiful_mapsgiscoRmapsr_bloggersrspatialrstatssf https://dieghernan.github.io/201906_Beautiful1/ Sun, 02 Jun 2019 00:00:00 +0200
Leaflet, R, Markdown, Jekyll and GitHub Make it work in 6 steps - a short tutorial

5 min.

Recently I have been struggling to embed a leaflet map created with RStudio on my blog, hosted on GitHub via Jekyll (spoiler: I succeeded). In my case, I use the Chulapa remote theme, created by myself.

Index

  1. The GitHub/Jekyll part
    1. 1. What to include
    2. 2. Where to include
  2. The RStudio part
    1. 3. Creating the leaflet map
    2. 4. Set up the YAML front matter
  3. The Markdown part
    1. 5. Modifying the .md file
    2. 6. Publish your post
  4. Gallery: Size of a leaflet map
    1. Fixed size
      1. Example 1: 672x480px
      2. Example 2: 200x300px
    2. Responsive size
      1. Example 3: 100% width

Ready? Let’s go!

The GitHub/Jekyll part

The first step is to install the requested libraries in your GitHub page. As Jekyll basically transforms markdown into html, this step is a matter of what to include and where in your own repository.

1. What to include

This part is not really hard. Having a look at the source code of the Leaflet for R site, you can see this chunk:

<head>
  <!--code-->
  
  <script src="libs/jquery/jquery.min.js"></script>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <link href="libs/bootstrap/css/flatly.min.css" rel="stylesheet" />
  <script src="libs/bootstrap/js/bootstrap.min.js"></script>
  <script src="libs/bootstrap/shim/html5shiv.min.js"></script>
  
  ...
  <!--more libraries-->
  ...
  
  <link href="libs/rstudio_leaflet/rstudio_leaflet.css" rel="stylesheet" />
  <script src="libs/leaflet-binding/leaflet.js"></script>
  
  <!--code-->
</head>

So now we have it! The only thing to remember is that we need to load the libraries from the leaflet server (https://rstudio.github.io/leaflet), meaning that we have to prepend that URL to the library paths in our installation:

  <script src="https://rstudio.github.io/leaflet/libs/jquery/jquery.min.js"></script>
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <link href="https://rstudio.github.io/leaflet/libs/bootstrap/css/flatly.min.css" rel="stylesheet" />
  
  ...
  <!--more libraries-->
  ...
  
  <link     href="https://rstudio.github.io/leaflet/libs/rstudio_leaflet/rstudio_leaflet.css" rel="stylesheet" />
  <script src= "https://rstudio.github.io/leaflet/libs/leaflet-binding/leaflet.js"></script>

You can have a look at my implementation in ./_includes/leaflet.html.

2. Where to include

This is a little more complicated and depends on the structure of your Jekyll template. The code chunk should be included in the <head> section of your page, so you need to find where to put it. In the case of Chulapa it is ./_includes/custom/custom_head_before_css.html.

So now you just have to paste in the <head> the code that you got on step 1.

Pro tip: For better site performance, include these libraries only when you need them. In my case, I added a custom variable to the YAML front matter of posts containing a leaflet map, leafletmap: true. Go to step 4 for a working example.
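The conditional loading can be sketched with a Liquid check in the <head> include (a sketch, not the theme's exact code; leaflet.html is the include from step 1 and page.leafletmap the custom front-matter variable; adapt both to your setup):

```liquid
{% if page.leafletmap %}
  {% include leaflet.html %}
{% endif %}
```

With this in place, posts without leafletmap: true never download the leaflet assets at all.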

The RStudio part

3. Creating the leaflet map

Now it's time to create a leaflet map with RStudio. I keep it simple for this post, so I took the first example provided in Leaflet for R - Introduction:

library(leaflet)

m <- leaflet() %>%
  addTiles() %>%  # Add default OpenStreetMap map tiles
  addMarkers(lng=174.768, lat=-36.852, popup="The birthplace of R")
m  # Print the map

It is assumed that you are creating the post with RStudio, so the code presented above should be embedded in an .Rmd file.

4. Set up the YAML front matter

Before knitting your .Rmd file, you have to set up the YAML front matter. Here it is essential to set the option always_allow_html: yes, as well as a markdown output format (this post uses md_document with the gfm variant). As an example, this post was created with the following front matter:

---
title: Leaflet, <strong>R</strong>, Markdown, Jekyll and GitHub
subtitle: Make it work in 6 steps - a short tutorial
tags: [R,leaflet,Jekyll, html, maps]
header_img: https://dieghernan.github.io/assets/figs/20190520_imgpost.webp
leafletmap: true
always_allow_html: yes
last_modified_at: 2021-05-30
output: 
  md_document:
    variant: gfm
    preserve_yaml: true
---

We are almost there! Now "Knit" your code and get the corresponding .md file.

The Markdown part

5. Modifying the .md file

Update: This is not needed any more! I still leave it here for info. You can skip to the next section.

Have a look at the .md file you have just created. Although not displayed in the preview, you can see in the file itself a chunk that looks like this:

  <script type="application/json" data-for="7ab57412f7b1df4d5773">
    {"x":{"options":
      ...
      "jsHooks":[]}
  </script>

Actually that chunk is your leaflet map, created with RStudio. You can't see it yet because you are previewing a markdown file on your local PC, while the libraries installed in step 1 live on GitHub; we will solve that later.

Now you just need to paste this piece of code before that chunk:

<!--html_preserve-->
<div id="htmlwidget-7ab57412f7b1df4d5773" style="width:100%;height:216px;" class="leaflet html-widget"></div>
  <script type="application/json" data-for="htmlwidget-7ab57412f7b1df4d5773">
  ...

Warning: Make sure that the widget id (7ab57412f7b1df4d5773 in the example) is the same in the <div> and in the <script> parts. If they don't match, your map will not load.

The style="width:100%;height:216px;" part controls the actual size of the leaflet widget. In this case, the map adapts to the width of the page with a fixed height of 216px. I put some examples of different size options at the end of the post so you can see which one is most suitable for your needs.

6. Publish your post

Now you just have to publish your post as usual! If everything has been properly set up, when Jekyll builds your post it will include the libraries in the header and make the magic happen, just like this:

Warning: Have you checked the YAML front matter of your .md file? Have another look, especially if you followed my pro tip.

Gallery: Size of a leaflet map

A note: For a complete understanding of this section, it is recommended to view it on multiple devices so you can see the different behavior on different screens. Google Chrome allows you to simulate different devices (more info here).

Fixed size

With these examples you can see how to control the absolute size of the leaflet map. The disadvantage of this method is that the size is fixed for all devices, so maps sized for smartphones or tablets won't look as nice on laptops, and vice versa.

Example 1: 672x480px

Fixed size in pixels. By default on my machine:

leaflet(options = leafletOptions(minZoom = 1.25, maxZoom = 8)) %>%
  addTiles() %>%
  setMaxBounds(-200, -90, 200, 90) %>%
  setView(-3.56948, 40.49181, zoom = 3)
Example 2: 200x300px

Let's go narrow and long, with style "width:200px;height:300px;":

leaflet(
  options = leafletOptions(minZoom = 1.25, maxZoom = 8),
  width = "200px", height = "300px"
) %>%
  addTiles() %>%
  setMaxBounds(-200, -90, 200, 90) %>%
  setView(-3.56948, 40.49181, zoom = 3)

Responsive size

Recommended option. These maps would adapt to the width of your screen, no matter what device you are using.

Example 3: 100% width
leaflet(
  options = leafletOptions(minZoom = 1.25, maxZoom = 8),
  width = "100%"
) %>%
  addTiles() %>%
  setMaxBounds(-200, -90, 200, 90) %>%
  setView(-3.56948, 40.49181, zoom = 3)
]]>
https://dieghernan.github.io/201905_Leaflet_R_Jekyll/ posts htmljekyllleafletmapsr_bloggersrspatialrstats https://dieghernan.github.io/201905_Leaflet_R_Jekyll/ Mon, 20 May 2019 00:00:00 +0200
Where in the world? A leaflet map with the places I have flown

1 min.

This is a very personal post, where I just show a map of all the places I have traveled by plane.

✈️ 251,536.4 km flown so far.

Top Cities

| City | Country | N |
|---|---|---|
| Bilbao | Spain | 62 |
| London | United Kingdom | 23 |
| Edinburgh | United Kingdom | 21 |
| Barcelona | Spain | 16 |
| Brussels | Belgium | 8 |
| Berlin | Germany | 6 |
| Munich | Germany | 6 |
| Palma de Mallorca | Spain | 6 |
| Amsterdam | Netherlands | 4 |
| Frankfurt | Germany | 4 |

Top Countries

| Country | Continent | N |
|---|---|---|
| Spain | Europe | 97 |
| United Kingdom | Europe | 46 |
| Germany | Europe | 16 |
| Belgium | Europe | 8 |
| France | Europe | 6 |

Top Continents

| Continent | Region | N |
|---|---|---|
| Europe | Southern Europe | 101 |
| Europe | Northern Europe | 52 |
| Europe | Western Europe | 36 |
| Asia | Western Asia | 6 |
| North America | Northern America | 6 |
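Tallies like these can be produced from a simple flight log; a minimal base R sketch with made-up data (cities and counts below are illustrative only, not my real log):

```r
# Hypothetical flight log: one row per flight
flights <- data.frame(
  city    = c("Bilbao", "London", "Bilbao", "Edinburgh", "Bilbao", "London"),
  country = c("Spain", "United Kingdom", "Spain", "United Kingdom",
              "Spain", "United Kingdom"),
  stringsAsFactors = FALSE
)

# Count flights per city and per country, most frequent first
top_cities    <- sort(table(flights$city), decreasing = TRUE)
top_countries <- sort(table(flights$country), decreasing = TRUE)
```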
]]>
https://dieghernan.github.io/201905_Where-in-the-world/ posts leafletmapsrspatialrstats https://dieghernan.github.io/201905_Where-in-the-world/ Mon, 13 May 2019 00:00:00 +0200
Cast a line to subsegments in R User-defined function using sf package

3 min.

This post introduces a user-defined function for casting sf objects of class LINESTRING or POLYGON into sub-strings.

Required R packages

library(sf)
library(rnaturalearth)
library(dplyr)
library(knitr)

The problem

The sf package includes st_cast, a very powerful function that transforms geometries into other types of geometries (e.g. LINESTRING to POLYGON, etc.).

italy <- ne_countries(country = "italy", returnclass = "sf")
italy_pol <- italy %>% st_cast("POLYGON")
italy_lin <- italy_pol %>% st_cast("LINESTRING")
italy_pt <- italy_lin %>% st_cast("POINT")
par(mfrow = c(2, 2), mar = c(1, 1, 1, 1), bg = NA)
plot(st_geometry(italy), col = c("red", "yellow", "blue"), main = "MULTIPOLYGON")
plot(st_geometry(italy_pol), col = c("red", "yellow", "blue"), main = "POLYGON")
plot(st_geometry(italy_lin), col = c("red", "yellow", "blue"), main = "LINE")
plot(st_geometry(italy_pt), col = c("red", "yellow", "blue"), main = "POINT")

plot of chunk 20190505_italycast

What I missed when using st_cast is the possibility to “break” the LINESTRING objects into sub-segments:

plot of chunk 20190505_italycastsub

An approach

So one possible solution is to create LINESTRING objects for each consecutive pair of POINT objects across the original geometry. Let’s check it:

par(mfrow = c(1, 2), mar = c(1, 1, 1, 1))
test <- ne_countries(country = "spain", returnclass = "sf") %>%
  st_cast("POLYGON") %>%
  st_cast("LINESTRING")
plot(st_geometry(test), col = c("red", "yellow", "blue"), main = "LINESTRING")

geom <- lapply(
  1:(length(st_coordinates(test)[, 1]) - 1),
  function(i) {
    rbind(
      as.numeric(st_coordinates(test)[i, 1:2]),
      as.numeric(st_coordinates(test)[i + 1, 1:2])
    )
  }
) %>%
  st_multilinestring() %>%
  st_sfc(crs = st_crs(test)) %>%
  st_cast("LINESTRING")
plot(st_geometry(geom), col = c("red", "yellow", "blue"), main = "AFTER FUNCTION")

plot of chunk 20190505_testspain

The function stdh_cast_substring

Finally, I wrapped the solution into a function and extended it a little bit:

  • When the input is not a LINESTRING or a POLYGON, the function returns an error and stops.

  • The function accepts sf objects with several rows or sfc objects with several geometries, and returns the same class as the input. In the case of sf objects, the input data.frame is kept attached to the output.

  • By default, the output is a MULTILINESTRING geometry. This has the benefit that the output has the same number of geometries as the input. This can be changed by setting the to parameter to "LINESTRING", which in fact just casts the MULTILINESTRING object into LINESTRING.

stdh_cast_substring <- function(x, to = "MULTILINESTRING") {
  ggg <- st_geometry(x)

  if (!unique(st_geometry_type(ggg)) %in% c("POLYGON", "LINESTRING")) {
    stop("Input should be LINESTRING or POLYGON")
  }
  for (k in seq_len(length(ggg))) {
    sub <- ggg[k]
    # Build a sub-segment for each consecutive pair of coordinates
    geom <- lapply(
      seq_len(nrow(st_coordinates(sub)) - 1),
      function(i) {
        rbind(
          as.numeric(st_coordinates(sub)[i, 1:2]),
          as.numeric(st_coordinates(sub)[i + 1, 1:2])
        )
      }
    ) %>%
      st_multilinestring() %>%
      st_sfc()

    if (k == 1) {
      endgeom <- geom
    } else {
      endgeom <- rbind(endgeom, geom)
    }
  }
  endgeom <- endgeom %>% st_sfc(crs = st_crs(x))
  # For sf inputs, keep the original data.frame attached
  if (inherits(x, "sf")) {
    endgeom <- st_set_geometry(x, endgeom)
  }

  if (to == "LINESTRING") {
    endgeom <- endgeom %>% st_cast("LINESTRING")
  }
  return(endgeom)
}

The function could be improved in terms of performance. Given that it works at the coordinate level, for high-resolution objects there is some noticeable delay:

test100 <- ne_countries(
  continent = "south america",
  returnclass = "sf"
) %>%
  st_cast("POLYGON")

test50 <- ne_countries(50,
  continent = "south america",
  returnclass = "sf"
) %>%
  st_cast("POLYGON")



init <- Sys.time()
t1 <- stdh_cast_substring(test100, "LINESTRING")
end <- Sys.time()
kable(end - init, format = "markdown")
| x |
|:--|
| 0.1729319 secs |
init <- Sys.time()
t2 <- stdh_cast_substring(test50, "LINESTRING")
end <- Sys.time()
kable(end - init, format = "markdown")
| x |
|:--|
| 2.288558 secs |
par(mfrow = c(1, 1), mar = c(0, 0, 0, 0))
plot(st_geometry(test50), col = NA, bg = "#C6ECFF")
plot(st_geometry(ne_countries(50, returnclass = "sf")), col = "#F6E1B9", border = "#646464", add = T)
plot(st_geometry(test50), col = "#FEFEE9", border = "#646464", add = T)
plot(st_geometry(t2), col = c("red", "yellow", "blue"), add = T, lwd = 0.5)

plot of chunk 20190505_benchmarkfunction

A difference in performance can be seen: test100 has 15 polygons decomposed into 914 sub-strings, while test50 has 80 polygons decomposed into 8,414 sub-strings. In any case, the original st_cast is much faster, although this solution may work well enough in most cases.
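One likely bottleneck is that st_coordinates() is re-evaluated inside every lapply() iteration. A possible improvement, sketched below in base R on a plain coordinate matrix, is to extract the coordinates once and build all start/end pairs by indexing (the sf wrapping would stay as in stdh_cast_substring above):

```r
# Coordinates of a closed square ring, as st_coordinates() would return them
coords <- matrix(c(
  0, 0,
  1, 0,
  1, 1,
  0, 1,
  0, 0
), ncol = 2, byrow = TRUE)

# All consecutive start/end pairs, computed by indexing in one step
idx    <- seq_len(nrow(coords) - 1)
starts <- coords[idx, , drop = FALSE]
ends   <- coords[idx + 1, , drop = FALSE]

# Row i of starts/ends defines sub-segment i: 4 segments for the square
segments <- lapply(idx, function(i) rbind(starts[i, ], ends[i, ]))
```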

]]>
https://dieghernan.github.io/201905_Cast-to-subsegments/ posts functionr_bloggersrnaturalearthrspatialrstatssf https://dieghernan.github.io/201905_Cast-to-subsegments/ Sun, 05 May 2019 00:00:00 +0200
Using CountryCodes database and sf package vignette of the CountryCodes project

2 min.

This vignette is an example of use of the database provided in the Github project Country Codes and International Organizations & Groups by using the sf package in R.

Required R packages

library(sf)
library(jsonlite)
library(rnaturalearth)
library(dplyr)
library(knitr)

Reading the data

The first step consists of reading the provided database (in this example, the json file) and extracting one international organization. In this example we will plot the Commonwealth of Nations.

df <- fromJSON("https://raw.githubusercontent.com/dieghernan/Country-Codes-and-International-Organizations/master/outputs/Countrycodesfull.json")
# Identify Commonwealth acronym
orgsdb <- read.csv("https://raw.githubusercontent.com/dieghernan/Country-Codes-and-International-Organizations/master/outputs/CountrycodesOrgs.csv") %>%
  distinct(org_id, org_name)

kable(orgsdb[grep("Common", orgsdb$org_name), ], format = "markdown")
|     | org_name | org_id |
|----:|----------|--------|
| 25  | Commonwealth | C |
| 26  | Central American Common Market | CACM |
| 30  | Caribbean Community and Common Market | CARICOM |
| 42  | Commonwealth of Independent States | CIS |
| 43  | Common Market for Eastern and Southern Africa | COMESA |
| 115 | Southern Cone Common Market | MERCOSUR |

In our case, the value to search for is C. A function is also provided that extracts the membership from the json database:

ISO_memcol <- function(df,
                       orgtosearch) {
  ind <- match(orgtosearch, unlist(df[1, "org_id"]))
  or <- lapply(1:nrow(df), function(x) {
    unlist(df[x, "org_member"])[ind]
  })
  or <- data.frame(matrix(unlist(or)), stringsAsFactors = F)
  names(or) <- orgtosearch
  df2 <- as.data.frame(cbind(df, or, stringsAsFactors = F))
  return(df2)
}
df_org <- ISO_memcol(df, "C")

Now df_org has a new column, named C, containing the membership status of each country.

df_org %>%
  count(C) %>%
  kable(format = "markdown")
| C | n |
|---|---|
| member | 53 |
| NA | 222 |
df_org %>%
  filter(!is.na(C)) %>%
  select(
    ISO_3166_3,
    NAME.EN,
    C
  ) %>%
  head() %>%
  kable(format = "markdown")
| ISO_3166_3 | NAME.EN | C |
|---|---|---|
| ATG | Antigua & Barbuda | member |
| AUS | Australia | member |
| BHS | Bahamas | member |
| BGD | Bangladesh | member |
| BRB | Barbados | member |
| BLZ | Belize | member |

Replacing the data on a map

In this example the rnaturalearth package is used to retrieve an sf object. The code below replaces the data.frame part of the sf object with the dedicated database.

testmap <- ne_countries(50,
  "countries",
  returnclass = "sf"
) %>%
  select(ISO_3166_3 = adm0_a3) %>%
  full_join(df_org)

# We add also tiny countries
tiny <- ne_countries(50,
  "tiny_countries",
  returnclass = "sf"
) %>%
  select(ISO_3166_3 = adm0_a3) %>%
  full_join(df_org)

# Identify dependencies
ISOCommon <- df_org %>%
  filter(!is.na(C)) %>%
  select(
    ISO_3166_3.sov = ISO_3166_3,
    C_sov = C
  )
tiny <- left_join(tiny, ISOCommon)
tiny$C <- coalesce(tiny$C, tiny$C_sov)

Plotting map: Wikipedia style

Now we will try to plot a map resembling the one presented on the Wikipedia page for the Commonwealth.

Wiki

The map will be presented under a Robinson projection, and the color palette is based on the Wikipedia convention for Orthographic Maps, since that is the one used in the example.

# Projecting the map
testmap_rob <- st_transform(testmap, "+proj=robin")
tiny_rob <- st_transform(tiny, "+proj=robin")

# Bounding box
bbox <- st_linestring(rbind(
  c(-180, 90),
  c(180, 90),
  c(180, -90),
  c(-180, -90),
  c(-180, 90)
)) %>%
  st_segmentize(5) %>%
  st_cast("POLYGON") %>%
  st_sfc(crs = 4326) %>%
  st_transform(crs = "+proj=robin")

# Plotting
par(mar = c(0, 0, 0, 0), bg = NA)
plot(bbox,
  col = "#FFFFFF",
  border = "#AAAAAA",
  lwd = 1.5
)
plot(
  st_geometry(testmap_rob),
  col = "#B9B9B9",
  border = "#FFFFFF",
  lwd = 0.1,
  add = T
)

plot(
  st_geometry(testmap_rob %>%
    filter(!is.na(C))),
  col = "#346733",
  border = "#FFFFFF",
  lwd = 0.1,
  add = T
)

# Lastly, add tiny countries
# All
plot(
  st_geometry(tiny_rob),
  col = "#000000",
  bg = "#B9B9B9",
  add = T,
  pch = 21
)
# Dependencies
plot(
  st_geometry(tiny_rob %>%
    filter(!is.na(C)) %>%
    filter(!is.na(ISO_3166_3.sov))),
  bg = "#C6DEBD",
  col = "#000000",
  pch = 21,
  add = T
)
# Independent
plot(
  st_geometry(tiny_rob %>%
    filter(!is.na(C)) %>%
    filter(is.na(ISO_3166_3.sov))),
  bg = "#346733",
  col = "#000000",
  pch = 21,
  add = T
)
plot(bbox,
  col = NA,
  border = "#AAAAAA",
  lwd = 1.5,
  add = T
)

plot of chunk 20190427_mapfin

]]>
https://dieghernan.github.io/201904_Using-CountryCodes/ posts mapsr_bloggersrnaturalearthrspatialrstatssfvignette https://dieghernan.github.io/201904_Using-CountryCodes/ Sat, 27 Apr 2019 00:00:00 +0200
Country Codes & Organizations A database with geocodes

2 min.

Complete database of countries and territories, their different country codes under common standards (ISO-3166, GEC (Formerly FIPS), M49 (UN), STANAG (NATO), NUTS (EU), etc.) and their membership in different international organizations.

Note that blanks are presented as "" instead of NA, since the ISO 3166-1 alpha-2 code for Namibia is NA.
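This is worth keeping in mind when loading the .csv in R: by default, read.csv() converts the literal string "NA" into a missing value, which would silently drop Namibia's code. A minimal sketch of the pitfall and the fix:

```r
# Two-row sample: Namibia ("NA") and Spain ("ES")
csv <- "ISO_3166_2\nNA\nES"

# Default: the literal string "NA" is parsed as a missing value
bad <- read.csv(text = csv)

# Fix: treat only empty strings as missing, keeping "NA" as Namibia's code
good <- read.csv(text = csv, na.strings = "")
```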

vignette: Using Country Codes

A. Country Codes .csv

Main .csv file (Link) containing:

  • Country and regional codes
  • Currency, dependency status and sovereignty info
  • Names in English and Spanish as provided by Unicode CLDR
  • Additional information (demographics, capital, area, etc.)

Codes included

| Field | Description | Source | Notes |
|---|---|---|---|
| ISO_3166_1 | ISO 3166-1 numeric | Wikipedia | |
| ISO_3166_2 | ISO 3166-1 alpha-2 | Wikipedia | |
| ISO_3166_3 | ISO 3166-1 alpha-3 | Wikipedia | |
| FIPS_GEC | Geopolitical Entities and Codes (GEC) | CIA World Factbook | Formerly FIPS PUB 10-4 |
| STANAG | STANAG 1059 Country Codes | CIA World Factbook | Used by NATO |
| M49 | UN Country Code | UN Stats | |
| NUTS | NUTS 0 code | Wikipedia | Used by EU |
| geonameId | geonameId | geonames | |
| continentcode | geonames Continent Code | geonames | |
| regioncode | UN Regional Code | UN Stats | |
| interregioncode | Interregional Code | UN Stats | |
| subregioncode | Subregion Code | UN Stats | |
| ISO_3166_3.sov | Sovereign code | Wikipedia, Statoids | If non-independent |

Other information included

  • Currency
  • Dependency status
  • Names in English and Spanish: Country, Continents & Regions, capital
  • Population, area (km2) and developed region

B. International Organizations .csv

A single .csv file (Link) describing the membership status of each country across 186 international organizations.

| Field | Description |
|---|---|
| ISO_3166_2 | Matches with Countrycodes .csv |
| ISO_3166_3 | Matches with Countrycodes .csv |
| NAME.EN | Matches with Countrycodes .csv |
| source | Main data source |
| org_name | Name of the organization |
| org_id | Abbreviation or internal ID |
| org_member | Membership status |

C. Full json file .json

This .json file (Link) combines the previous files:

[
  ...
  {
    "ISO_3166_1": 12,
    "ISO_3166_2": "DZ",
    "ISO_3166_3": "DZA",
    "ISO_Official": true,
    "FIPS_GEC": "AG",
    "STANAG": "DZA",
    "M49": 12,
    "geonameId": 2589581,
    "continentcode": "AF",
    "regioncode": 2,
    "subregioncode": 15,
    "currency": "DZD",
    "independent": true,
    "NAME.EN": "Algeria",
    "CONTINENT.EN": "Africa",
    "REGION.EN": "Africa",
    "SUBREGION.EN": "Northern Africa",
    "CAPITAL.EN": "Algiers",
    "NAME.ES": "Argelia",
    "CONTINENT.ES": "Africa",
    "REGION.ES": "África",
    "SUBREGION.ES": "África septentrional",
    "CAPITAL.ES": "Argel",
    "pop": 34586184,
    "area_km2": 2381740,
    "Developed": "Developing",
    "org_id": ["ABEDA", "ACP", "ADB", "AFDB", "AFESD", "AG", "AL", 
    ...
    ],
    "org_member": ["member", null, null, "member", "member", null,
    "member",
    ...
    ]
  },
  ...
]

A complementary function (intended to be used in R) has been developed:

ISO_memcol <- function(df, # input data frame
                       orgtosearch # org id
) {
  ind <- match(orgtosearch, unlist(df[1, "org_id"]))
  or <- lapply(1:nrow(df), function(x) {
    unlist(df[x, "org_member"])[ind]
  })
  or <- data.frame(matrix(unlist(or)), stringsAsFactors = FALSE)
  names(or) <- orgtosearch
  df2 <- as.data.frame(cbind(df, or, stringsAsFactors = FALSE))
  return(df2)
}
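A quick usage sketch on a toy data frame that mimics the list-column layout of the json above (the definition is repeated so the snippet is self-contained; the country names and memberships below are made up for illustration):

```r
ISO_memcol <- function(df, orgtosearch) {
  ind <- match(orgtosearch, unlist(df[1, "org_id"]))
  or <- lapply(1:nrow(df), function(x)
    unlist(df[x, "org_member"])[ind])
  or <- data.frame(matrix(unlist(or)), stringsAsFactors = FALSE)
  names(or) <- orgtosearch
  as.data.frame(cbind(df, or, stringsAsFactors = FALSE))
}

# Toy data mimicking the json structure (illustrative values only)
toy <- data.frame(NAME.EN = c("Algeria", "Spain"), stringsAsFactors = FALSE)
toy$org_id     <- list(c("AL", "EU"), c("AL", "EU"))
toy$org_member <- list(c("member", NA), c(NA, "member"))

# Adds a new column "EU" with the membership status per row
res <- ISO_memcol(toy, "EU")
```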

D. Data sources

]]>
https://dieghernan.github.io/projects/Country-Codes-and-International-Organizations/ projects Rcsvdatasetjsonprojectwebscrapping https://dieghernan.github.io/projects/Country-Codes-and-International-Organizations/ Thu, 11 Apr 2019 00:00:00 +0200
Bzel A Pebble project

1 min.

Project discontinued due to the shutdown of Pebble.

Bzel integrates the bezel into your watchface. Display minutes as digits, as a moving dot or as a fill in the bezel.

Banner

Features

  • Clock mode:
    • Digital: Minute display based on analog movement
    • Dot: Moving dot as minute marker
    • Bezel: A bar moving around the bezel as minute marker
  • Autodetection of 12h/24h based on your watch settings

Take your pick

  • Pebble Health: Display daily steps.
  • Date - Get the weekday based on the language set on your Pebble.
  • Weather: Current conditions in °C or °F.
  • Choose your weather provider:
  • Implementation of pmkey.xyz
  • Location, based on your selected weather provider
  • Night theme displayed between sunset and sunrise

Internationalization

Autotranslating of weekday supported for:

  • English
  • Spanish
  • German
  • French
  • Portuguese
  • Italian

Future developments

  • Location for weather and loc
  • Square support
  • New Minute Mode: Bezel
  • Steps
  • More Health Metrics

Screenshots

gif
gif
gif

Attributions

Fonts

  • Weather Icons by Eric Flowers, modified and fitted to regular alphabet, instead of Unicode values.
  • Custom font for icons created via Fontastic.
  • Gotham Fonts downloaded from fontsgeek.com

Weather providers

wp
wp
wp

Others

Master Key is a service for Pebble users. Get a unique PIN and add API Keys for your favorite online services. Please check www.pmkey.xyz for more info.

License

Developed under license MIT.

Made in Madrid, Spain ❤️

]]>
https://dieghernan.github.io/projects/Bzel/ projects Cdiscontinuedjavascriptpebbleprojectwatchface https://dieghernan.github.io/projects/Bzel/ Thu, 25 May 2017 00:00:00 +0200
7egment A Pebble project

1 min.

Project discontinued due to the shutdown of Pebble.

7egment is a customizable watchface based on the classic 7-segment display that adds your location and the current weather information in the language used on your watch and smartphone.

Banner

Features

  • Autodetection of 12h/24h based on your watch settings
  • Internationalization: Autotranslating of weekday supported for:
    • English
    • Spanish
    • German
    • French
    • Portuguese
    • Italian

Options

  • Choose background colors, frame and text
  • 3 Bands design. Make it match your tie!
  • Weather: Current conditions in °C or °F.
  • Choose your weather provider:
  • Implementation of pmkey.xyz
  • Location, based on your selected weather provider
  • Bluetooth and GPS warnings
  • Night theme displayed between sunset and sunrise

Screenshots

gif
gif
gif

Attributions

Fonts:

Weather providers

wp
wp

Others

Master Key is a service for Pebble users. Get a unique PIN and add API Keys for your favorite online services. Please check www.pmkey.xyz for more info.

License

Developed under license MIT.

Made in Madrid, Spain ❤️

]]>
https://dieghernan.github.io/projects/7egment/ projects Cdiscontinuedjavascriptpebbleprojectwatchface https://dieghernan.github.io/projects/7egment/ Wed, 17 May 2017 00:00:00 +0200
Sfera A Pebble project

1 min.

Project discontinued due to the shutdown of Pebble.

Sfera for Pebble Time Round is a highly customizable watchface that gets the most of the smartwatch capabilities. Set your preferences and enjoy this beautifully designed watchface.

Banner

Features

  • Clock mode:
    • Analog: Classic analog watchface
    • Digital: Centered hour and minute display based on analog movement
    • Dual: Analog and Digital all in one
    • Mix: Digital Hour and Analog Minute
  • Autodetection of 12h/24h based on your watch settings

Take your pick

  • Date - Get the weekday based on the language set on your Pebble.
  • Dots as minute markers - choose your color
  • Battery level displayed beautifully as an arc near the bezel. Choose your color and below 20% it turns red!
  • Weather: Current conditions in °C or °F.
  • Choose your weather provider:
  • Implementation of pmkey.xyz
  • Location, based on your selected weather provider
  • Night theme displayed between sunset and sunrise

Internationalization

Autotranslating of weekday supported for:

  • English
  • Spanish
  • German
  • French
  • Portuguese
  • Italian

Future developments

  • 12/24h mode
  • Night theme
  • Several weather providers available
  • pmkey.xyz implemented for easy managing your API keys

Screenshots

GIF

Attributions

Fonts:

  • Weather Icons by Eric Flowers, modified and fitted to regular alphabet, instead of Unicode values.
  • Custom font for icons created via Fontastic.
  • Gotham Fonts downloaded from fontsgeek.com

Weather providers

wp
wp
wp

Others

Master Key is a service for Pebble users. Get a unique PIN and add API Keys for your favorite online services. Please check www.pmkey.xyz for more info.

License

Developed under license MIT.

Made in Madrid, Spain ❤️

]]>
https://dieghernan.github.io/projects/Sfera/ projects Cdiscontinuedjavascriptpebbleprojectwatchface https://dieghernan.github.io/projects/Sfera/ Tue, 14 Mar 2017 00:00:00 +0100
TextWatch Clima A Pebble project

1 min.

Project discontinued due to the shutdown of Pebble.

TextWatch Clima upgrades the classic TextWatch watchface adding a bunch of new capabilities.

banner

Available for all the Classic, Time and Pebble 2 models

Features

  • Exact hour in natural language
  • Autofit to screen

banner

Take your pick

  • Date format: Day Month / Month Day
  • Fuzzy time option
  • Animation on text
  • Language
    • Spanish
    • English
    • German (thanks to rodher)
    • French
    • Italian
    • Portuguese
    • Norwegian
    • Danish
    • Swedish
    • Esperanto
    • Dutch
    • Catalan
    • More at request
  • Battery bar display
  • Weather: Current conditions in °C or °F.
  • Night theme displayed between sunset and sunrise
  • Choose your weather provider:
  • Implementation of pmkey.xyz

banner

Next developments

  • Fuzzy time
  • Battery
  • Option for animations
  • Option for non pmkey users
  • More languages (Swedish, Esperanto…)

Attributions

Fonts

  • Weather Icons by Eric Flowers, modified and fitted to regular alphabet, instead of Unicode values.
  • Custom font for icons created via Fontastic.
  • Gotham Fonts downloaded from fontsgeek.com

Weather providers

wp
wp
wp

Others

Screenshots

gif
gif
gif

License

Developed under license MIT.

Made in Madrid, Spain ❤️

]]>
https://dieghernan.github.io/projects/TextWatchClima/ projects Cdiscontinuedjavascriptpebbleprojectwatchface https://dieghernan.github.io/projects/TextWatchClima/ Thu, 16 Feb 2017 00:00:00 +0100