<![CDATA[Angular Space]]>https://www.angularspace.com/https://www.angularspace.com/favicon.pngAngular Spacehttps://www.angularspace.com/Ghost 6.22Fri, 13 Mar 2026 21:48:56 GMT60<![CDATA[Angular v20 Custom MatPaginator Styling]]>One of my more popular articles that I’ve published is Angular: MatPaginator Custom Styling, wrapping up around 17K views, which shows how to transform Angular Material’s paginator (on a mat-table) using a custom directive to make it look more appealing.

I decided to update this article

]]>
https://www.angularspace.com/angular-v20-custom-matpaginator-styling/6910ed68f8804d0001e097ccFri, 30 Jan 2026 09:30:45 GMT

One of my more popular articles that I’ve published is Angular: MatPaginator Custom Styling, wrapping up around 17K views, which shows how to transform Angular Material’s paginator (on a mat-table) using a custom directive to make it look more appealing.

I decided to update this article because two major things have changed since then. It was originally written for Angular v14; since then, Angular Material migrated to the MDC-based components, which caused quite a few issues for people updating their projects, and Angular itself has received several smaller improvements. Also, in my previous article I was accessing some private methods of the MatPaginator component. In the GIF below, you can see the final example we’ll be building. You can also find the full source code in this GitHub repository.

Angular v20 Custom MatPaginator Styling
Angular Custom MatPaginator End Result

I will start this blog post by showing the full code (you can always come back to it), and then we’ll go through the more interesting or less obvious parts. You might already have a table component displaying data, something like this:

import { afterNextRender, Component, viewChild } from '@angular/core';
import { MatPaginator, MatPaginatorModule } from '@angular/material/paginator';
import { MatTableDataSource, MatTableModule } from '@angular/material/table';
// example path - point this at wherever you place the directive
import { BubblePaginationDirective } from './bubble-pagination.directive';

type Data = { position: number; name: string; weight: number; symbol: string };

@Component({
  selector: 'app-test-table',
  imports: [MatTableModule, MatPaginatorModule, BubblePaginationDirective],
  template: `
    <table mat-table [dataSource]="dataSource">
      <!-- Position Column -->
      <ng-container matColumnDef="position">
        <th mat-header-cell *matHeaderCellDef>No.</th>
        <td mat-cell *matCellDef="let element">{{ element.position }}</td>
      </ng-container>

      <!-- ... more columns ... -->

      <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
      <tr mat-row *matRowDef="let row; columns: displayedColumns"></tr>
    </table>

    <mat-paginator [length]="dataSource.data.length" [pageSize]="10" />
  `,
})
export class TestTableComponent {
  readonly dataSource = new MatTableDataSource<Data>([]);
  readonly paginator = viewChild(MatPaginator);

  readonly displayedColumns: string[] = ['position', 'name', 'weight', 'symbol'];

  constructor() {
    const data: Data[] = [];
    Array.from({ length: 100 }, (_, k) => k + 1).forEach(v => {
      data.push({ position: v, name: `Element ${v}`, weight: v * 1.5, symbol: `E${v}` });
    });

    this.dataSource.data = data;

    afterNextRender(() => {
      const paginator = this.paginator();

      if (paginator) {
        this.dataSource.paginator = paginator;
      }
    });
  }
}

With this simple configuration, your table looks as follows:

Angular v20 Custom MatPaginator Styling
Basic Table Basic Pagination

What we want to achieve is a directive that we can attach to the mat-paginator to transform the look of our paginated table into the bubble example. The usage of the directive will be the following:

<mat-paginator
  [appBubblePagination]="dataSource.data.length"
  (page)="onPageChange($event)"
  [length]="dataSource.data.length"
  [pageSize]="15">
</mat-paginator>

Here is the full code of the directive; below, I describe some of its sections.

import { Directive, ElementRef, Renderer2, afterRenderEffect, inject, input, untracked } from '@angular/core';
import { MatPaginator } from '@angular/material/paginator';

/**
 * Works from Angular Material version 15 onwards, since all classes got the new 'mdc-' prefix
 */
@Directive({
  selector: '[appBubblePagination]',
})
export class BubblePaginationDirective {
  private readonly matPag = inject(MatPaginator, {
    optional: true,
    self: true,
    host: true,
  });
  private readonly elementRef = inject(ElementRef);
  private readonly ren = inject(Renderer2);

  /**
   * whether we want to display first/last button and dots
   */
  readonly showFirstButton = input(true);
  readonly showLastButton = input(true);

  /**
   * total number of items in pagination
   * needed to calculate how many buttons to render
   * when page size changes
   */
  readonly paginationSize = input(0, {
    alias: 'appBubblePagination',
  });

  /**
   * how many buttons to display before and after
   * the selected button
   */
  readonly renderButtonsNumber = input(2);

  /**
   * references to DOM elements
   */
  private dotsEndRef!: HTMLElement;
  private dotsStartRef!: HTMLElement;
  private bubbleContainerRef!: HTMLElement;

  /**
   * refs to the rendered buttons so we can remove them when the size changes
   */
  private buttonsRef: HTMLElement[] = [];

  readonly buildButtonsEffect = afterRenderEffect(() => {
    // rebuild buttons when pagination size change
    this.paginationSize();

    untracked(() => {
      // remove buttons before creating new ones
      this.removeButtons();

      // set some default styles to mat pagination
      this.styleDefaultPagination();

      // create bubble container
      this.createBubbleDivRef();

      // create all buttons
      this.buildButtons();

      // switch back to page 0
      this.switchPage(0);
    });
  });

  /**
   * change the active button style to the current one and display/hide additional buttons
   * based on the navigated index
   */
  private changeActiveButtonStyles(previousIndex: number, newIndex: number) {
    const previouslyActive = this.buttonsRef[previousIndex];
    const currentActive = this.buttonsRef[newIndex];

    if (!previouslyActive && !currentActive) {
      return;
    }

    // remove active style from previously active button
    if (previouslyActive) {
      this.ren.removeClass(previouslyActive, 'g-bubble__active');
    }

    // add active style to new active button
    if (currentActive) {
      this.ren.addClass(currentActive, 'g-bubble__active');
    }

    // hide all buttons
    this.buttonsRef.forEach(button => this.ren.setStyle(button, 'display', 'none'));

    // show N previous buttons and X next buttons
    const renderElements = this.renderButtonsNumber();
    const endDots = newIndex < this.buttonsRef.length - renderElements - 1;
    const startDots = newIndex - renderElements > 0;

    const firstButton = this.buttonsRef[0];
    const lastButton = this.buttonsRef[this.buttonsRef.length - 1];

    // last bubble and dots
    if (this.showLastButton()) {
      this.ren.setStyle(this.dotsEndRef, 'display', endDots ? 'block' : 'none');
      this.ren.setStyle(lastButton, 'display', endDots ? 'flex' : 'none');
    }

    // first bubble and dots
    if (this.showFirstButton()) {
      this.ren.setStyle(this.dotsStartRef, 'display', startDots ? 'block' : 'none');
      this.ren.setStyle(firstButton, 'display', startDots ? 'flex' : 'none');
    }

    // resolve starting and ending index to show buttons
    const startingIndex = startDots ? newIndex - renderElements : 0;

    const endingIndex = endDots ? newIndex + renderElements : this.buttonsRef.length - 1;

    // display starting buttons
    for (let i = startingIndex; i <= endingIndex; i++) {
      const button = this.buttonsRef[i];
      this.ren.setStyle(button, 'display', 'flex');
    }
  }

  /**
   * Removes or changes the styling of some HTML elements
   */
  private styleDefaultPagination() {
    const nativeElement = this.elementRef.nativeElement;
    const itemsPerPage = nativeElement.querySelector('.mat-mdc-paginator-page-size');
    const howManyDisplayedEl = nativeElement.querySelector('.mat-mdc-paginator-range-label');
    const previousButton = nativeElement.querySelector('button.mat-mdc-paginator-navigation-previous');
    const nextButtonDefault = nativeElement.querySelector('button.mat-mdc-paginator-navigation-next');

    // remove 'items per page'
    if (itemsPerPage) {
      this.ren.setStyle(itemsPerPage, 'display', 'none');
    }

    // style text of how many elements are currently displayed
    if (howManyDisplayedEl) {
      this.ren.setStyle(howManyDisplayedEl, 'position', 'absolute');
      this.ren.setStyle(howManyDisplayedEl, 'color', '#919191');
      this.ren.setStyle(howManyDisplayedEl, 'font-size', '14px');
      this.ren.setStyle(howManyDisplayedEl, 'left', '0');
    }

    // remove the default left & right arrows
    if (previousButton) {
      this.ren.setStyle(previousButton, 'display', 'none');
    }
    if (nextButtonDefault) {
      this.ren.setStyle(nextButtonDefault, 'display', 'none');
    }
  }

  /**
   * creates `bubbleContainerRef` where all buttons will be rendered
   */
  private createBubbleDivRef(): void {
    const actionContainer = this.elementRef.nativeElement.querySelector('div.mat-mdc-paginator-range-actions');
    const nextButtonDefault = this.elementRef.nativeElement.querySelector('button.mat-mdc-paginator-navigation-next');

    // create a HTML element where all bubbles will be rendered
    this.bubbleContainerRef = this.ren.createElement('div') as HTMLElement;
    this.ren.addClass(this.bubbleContainerRef, 'g-bubble-container');

    // render element before the 'next button' is displayed
    this.ren.insertBefore(actionContainer, this.bubbleContainerRef, nextButtonDefault);
  }

  /**
   * helper function that builds all buttons and adds dots
   * between the first button, the middle ones, and the last button
   *
   * end result: (1) .... (4) (5) (6) ... (25)
   */
  private buildButtons(): void {
    if (!this.matPag) {
      return;
    }

    const neededButtons = Math.ceil(this.matPag.length / this.matPag.pageSize);

    // if there is at most one page, do not render buttons
    if (neededButtons <= 1) {
      this.ren.setStyle(this.elementRef.nativeElement, 'display', 'none');
      return;
    }

    // set back from hidden to block
    this.ren.setStyle(this.elementRef.nativeElement, 'display', 'block');

    // create first button
    this.buttonsRef = [this.createButton(0)];

    // add dots (....) to UI
    this.dotsStartRef = this.createDotsElement();

    // create all buttons needed for navigation (except the first & last one)
    for (let index = 1; index < neededButtons - 1; index++) {
      this.buttonsRef = [...this.buttonsRef, this.createButton(index)];
    }

    // add dots (....) to UI
    this.dotsEndRef = this.createDotsElement();

    // create last button to UI after the dots (....)
    this.buttonsRef = [...this.buttonsRef, this.createButton(neededButtons - 1)];
  }

  /**
   * Remove all buttons from DOM
   */
  private removeButtons(): void {
    this.buttonsRef.forEach(button => {
      this.ren.removeChild(this.bubbleContainerRef, button);
    });

    // remove dots
    if (this.dotsStartRef) {
      this.ren.removeChild(this.bubbleContainerRef, this.dotsStartRef);
    }
    if (this.dotsEndRef) {
      this.ren.removeChild(this.bubbleContainerRef, this.dotsEndRef);
    }

    // remove the (now empty) container itself, so re-runs don't accumulate containers
    if (this.bubbleContainerRef?.parentNode) {
      this.ren.removeChild(this.bubbleContainerRef.parentNode, this.bubbleContainerRef);
    }

    // Empty state array
    this.buttonsRef.length = 0;
  }

  /**
   * create button HTML element
   */
  private createButton(i: number): HTMLElement {
    const bubbleButton = this.ren.createElement('div');
    const text = this.ren.createText(String(i + 1));

    // add class & text
    this.ren.addClass(bubbleButton, 'g-bubble');
    this.ren.setStyle(bubbleButton, 'margin-right', '8px');
    this.ren.appendChild(bubbleButton, text);

    // react on click
    this.ren.listen(bubbleButton, 'click', () => {
      this.switchPage(i);
    });

    // render on UI
    this.ren.appendChild(this.bubbleContainerRef, bubbleButton);

    // set style to hidden by default
    this.ren.setStyle(bubbleButton, 'display', 'none');

    return bubbleButton;
  }

  /**
   * helper function to create dots (....) on DOM indicating that there are
   * many more bubbles until the last one
   */
  private createDotsElement(): HTMLElement {
    const dotsEl = this.ren.createElement('span');
    const dotsText = this.ren.createText('.....');

    // add class
    this.ren.setStyle(dotsEl, 'font-size', '18px');
    this.ren.setStyle(dotsEl, 'margin-right', '8px');
    this.ren.setStyle(dotsEl, 'padding-top', '6px');
    this.ren.setStyle(dotsEl, 'color', '#919191');

    // append text to element
    this.ren.appendChild(dotsEl, dotsText);

    // render dots to UI
    this.ren.appendChild(this.bubbleContainerRef, dotsEl);

    // set style none by default
    this.ren.setStyle(dotsEl, 'display', 'none');

    return dotsEl;
  }

  /**
   * Helper function to switch page
   */
  private switchPage(i: number): void {
    if (!this.matPag) {
      return;
    }

    const previousPageIndex = this.matPag.pageIndex;

    // switch page index of mat paginator
    this.matPag.pageIndex = i;

    // change active button styles
    this.changeActiveButtonStyles(previousPageIndex, this.matPag.pageIndex);

    // need to trigger page event manually, because we are changing pageIndex programmatically
    this.matPag.page.emit({
      pageIndex: i,
      pageSize: this.matPag.pageSize,
      length: this.matPag.length,
      previousPageIndex: previousPageIndex,
    });
  }
}

Injecting Dependencies

inject(MatPaginator, { optional: true, self: true, host: true })
inject(ElementRef)
inject(Renderer2)
  • MatPaginator - we hook into its current pageIndex, pageSize, and its page stream
  • ElementRef - root element of the paginator (so we can query its internals)
  • Renderer2 - the safe way to create elements, set styles/classes, and listen to events. Works nicely with SSR and avoids direct DOM APIs

Input / Output Bindings

  • showFirstButton = input(true); & showLastButton = input(true); - whether to display first and last buttons for fast navigation on the start / end of the table
  • renderButtonsNumber = input(2); - when you are on an active page, let’s say index 6, how many buttons before and after it should we display
  • paginationSize - this input is required to detect changes in the table length (after filtering or loading new data). When it changes, the directive re-renders the bubbles to match the correct number of pages.

Core Logic Execution

The directive itself is around 300 lines of code, but when you break it down, the main logic lives inside the buildButtonsEffect effect:

readonly buildButtonsEffect = afterRenderEffect(() => {
  // rebuild buttons when pagination size change
  this.paginationSize();

  untracked(() => {
    // remove buttons before creating new ones
    this.removeButtons();

    // set some default styles to mat pagination
    this.styleDefaultPagination();

    // create bubble container
    this.createBubbleDivRef();

    // create all buttons
    this.buildButtons();

    // switch back to page 0
    this.switchPage(0);
  });
});

What is great about the effect is that it will re-execute when the length (size) of the data changes. For example, if you have one table but do server-side data filtering, then each time new data arrives, the bubbles will be recalculated thanks to the paginationSize signal input. A brief overview is described below.

  • paginationSize - listen on page size changes (on new data) to recalculate bubbles
  • removeButtons() – clear previous custom DOM (if re-running)
  • styleDefaultPagination() – hide certain default Material bits and position labels
  • createBubbleDivRef() – create a container where our bubbles will live
  • buildButtons() – create bubbles + dots based on total length and page size
  • switchPage(0) – start from page 0 to keep things predictable
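Before moving on, it may help to see the core math in isolation. The following is a hypothetical, DOM-free sketch of the calculations that buildButtons and changeActiveButtonStyles perform; the names pageCount and bubbleWindow are mine, not part of the directive:

```typescript
type BubbleWindow = {
  showStartDots: boolean;
  showEndDots: boolean;
  startIndex: number; // first visible neighbour bubble
  endIndex: number;   // last visible neighbour bubble
};

// how many bubbles are needed in total (mirrors the Math.ceil in buildButtons)
function pageCount(length: number, pageSize: number): number {
  return Math.ceil(length / pageSize);
}

// which bubbles are visible around the active page, and whether dots are shown
function bubbleWindow(
  activeIndex: number,
  totalPages: number,
  renderButtonsNumber: number,
): BubbleWindow {
  // same conditions as in changeActiveButtonStyles
  const showEndDots = activeIndex < totalPages - renderButtonsNumber - 1;
  const showStartDots = activeIndex - renderButtonsNumber > 0;

  return {
    showStartDots,
    showEndDots,
    startIndex: showStartDots ? activeIndex - renderButtonsNumber : 0,
    endIndex: showEndDots ? activeIndex + renderButtonsNumber : totalPages - 1,
  };
}
```

For example, with 25 pages and renderButtonsNumber of 2, a middle page shows its two neighbours on each side plus dots and the first/last bubbles, while page 0 shows only itself, its two next neighbours, and the end dots.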

You may also ask why we use afterRenderEffect instead of effect. The reason is that afterRenderEffect executes only in the browser, while effect also runs on the server, which could lead to errors if your app supports SSR. For a deeper explanation, check out my afterRenderEffect, afterNextRender, afterEveryRender & Renderer2 article.

Core Logic Execution - Switching Page Manually

The custom bubbles live outside Angular Material’s built-in controls. That means when a user clicks a bubble, nothing in MatPaginator fires by itself; we have to connect the click to the paginator so the table (and anyone subscribed to matPag.page) reacts, hence the need for the switchPage() function.

private createButton(index: number): HTMLElement {
  // our unique bubble showing a specific page - 1, 2, etc.
  const bubbleButton = this.ren.createElement('div');

  // ... some code ...

  this.ren.listen(bubbleButton, 'click', () => {
    this.switchPage(index);
  });
	
  // ... some code ...
}

private switchPage(index: number): void {
  if (!this.matPag) {
    return;
  }

  const previousPageIndex = this.matPag.pageIndex;

  // switch page index of mat paginator
  this.matPag.pageIndex = index;

  // change active button styles
  this.changeActiveButtonStyles(previousPageIndex, this.matPag.pageIndex);

  // trigger page event manually, we are changing pageIndex programmatically
  this.matPag.page.emit({
    pageIndex: index,
    pageSize: this.matPag.pageSize,
    length: this.matPag.length,
    previousPageIndex: previousPageIndex,
  });
}

The key part is this.matPag.pageIndex = index;, which keeps the paginator’s internal state in sync with which bubble was clicked. If you were to remove this line, the pagination would stop working.

Next, since we are programmatically changing the index of the paginator (mentioned above), this.matPag.page will not emit by itself when you navigate through the items in the table, so we also need to emit this data manually.
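Conceptually, switchPage is just a state update plus a manually constructed event, which can be simulated without Angular at all. A hedged sketch follows; PaginatorLike and switchPageOn are illustrative names, not Material API:

```typescript
type PageEvent = {
  pageIndex: number;
  pageSize: number;
  length: number;
  previousPageIndex: number;
};

// only the MatPaginator fields the directive touches
type PaginatorLike = {
  pageIndex: number;
  pageSize: number;
  length: number;
};

// mutate the paginator state and build the event we must emit manually
function switchPageOn(pag: PaginatorLike, index: number): PageEvent {
  const previousPageIndex = pag.pageIndex;
  pag.pageIndex = index; // keep internal state in sync, as in the directive
  return {
    pageIndex: index,
    pageSize: pag.pageSize,
    length: pag.length,
    previousPageIndex,
  };
}
```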

Paginator Style Updates

Inside styleDefaultPagination, you can see we’re directly accessing the paginator’s internal HTML elements to restyle them. Is this ideal? Probably not. Since it relies on exact class selectors like button.mat-mdc-paginator-navigation-next, it’s fragile. These internal selectors can change between Material versions.

This already happened when Material v15 introduced the new MDC components, which broke custom styling for many projects using ::ng-deep. What we’re doing here is a similar kind of hack, but for our use case it’s the most practical solution available. Still, it’s worth being aware of the tradeoff.
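If you want to soften that fragility, one option is to centralize the lookups behind a null-safe helper that warns when a selector no longer matches. A hypothetical sketch, assuming you route all internal queries through it (safeQuery is my own name, not part of the directive above):

```typescript
// minimal interface so this sketch doesn't depend on the DOM lib
type SelectorRoot = {
  querySelector(selector: string): unknown | null;
};

// fail soft: return null and warn instead of throwing on a missing element,
// so a Material version bump degrades gracefully instead of crashing
function safeQuery<T>(root: SelectorRoot, selector: string): T | null {
  const el = root.querySelector(selector);
  if (!el) {
    console.warn(
      `BubblePagination: selector "${selector}" not found; ` +
        'Angular Material internals may have changed.',
    );
    return null;
  }
  return el as T;
}
```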

Custom Styles

Once the logic is done, we still need some styling to make it actually look like bubbles. This part is mostly CSS (or SCSS), and you can adjust it to fit your own theme. In my example, each bubble is a simple flex container with centered text, a hover state, and an active state to highlight the current page.

/* Custom paginator styles */
.g-bubble-container {
  display: flex;
  gap: 4px;
}

.g-bubble {
  background-color: #f0f0f0;
  border-radius: 50%;
  width: 34px;
  height: 34px;
  display: flex;
  align-items: center;
  justify-content: center;
  color: #2e2e2e;
  font-size: 14px;
  cursor: pointer;
  transition: 0.3s;

  &:hover {
    background-color: #636363;
    color: orange;
  }
}

.g-bubble__active {
  background-color: #636363;
  color: orange;
}

mat-paginator {
  background: transparent !important;
  /* need mat-paginator range to align with other mat-table elements */
  position: relative;
}

/* override alignment for the label that shows "x of y" */
.mat-mdc-paginator-range-label {
  margin: 0 !important;
}

Things to Keep in Mind

There are a few small details that are good to keep in mind:

  • Renderer2 limitations - you can’t directly use pseudo-elements (::before, ::after) from within the directive, so keep your visual parts inside SCSS files
  • Changing Angular Material internals - as mentioned earlier, mat-mdc- selectors can change in future Material versions, it’s one of the risks of this directive
  • Accessibility (a11y) - since these are custom clickable divs, you might want to add role="button" and tabindex="0" attributes so users can navigate the bubbles with a keyboard. You can also listen for keydown events and simulate click behavior with the space or enter key
  • SSR / Hydration - if you’re running Angular SSR, the directive should still work fine, since Renderer2 is SSR safe and we are using afterRenderEffect to render bubbles only on the client-side
  • Active Button State - currently, when we load more data into the table and re-execute the bubble-rendering logic, we go back to the first page; persisting the active state across reloads is not implemented yet
  • Important - bind the table length to the [appBubblePagination] input. Without it, the afterRenderEffect will not be notified that new data has arrived and the buttons will not be rebuilt
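On the accessibility point above, here is an illustrative sketch (not part of the article's directive) of the attributes each bubble could get and a predicate for keyboard activation; bubbleA11yAttributes and isActivationKey are my own names:

```typescript
// attributes to set on each bubble div so it is focusable and announced as a button
const bubbleA11yAttributes = { role: 'button', tabindex: '0' } as const;

// Enter and Space should behave like a click on a real button;
// some older browsers report the space key as 'Spacebar'
function isActivationKey(key: string): boolean {
  return key === 'Enter' || key === ' ' || key === 'Spacebar';
}
```

In the directive, you would apply the attributes with this.ren.setAttribute(...) inside createButton and listen for keydown alongside click, calling switchPage when isActivationKey returns true.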

Summary

By the end of this post, you should have a working, MDC-compatible paginator. The goal here wasn’t to replace Angular Material, but to show how far you can go using directives and Renderer2 to enhance an existing Material component, without creating your own custom paginator from scratch.

Of course, there is still room for improvement; it is a simple directive, so if you have any suggestions, feel free to comment. Hope you liked this example; catch more of my articles on dev.to, connect with me on LinkedIn, or check my Personal Website.

]]>
<![CDATA[Gemini and Angular, Part II: Creating Generative UIs]]>Let's continue our journey into LLMs and Gemini! In the previous article, we moved beyond simple text generation and learned:

  • how to force the model to speak our language using structured outputs (JSON schemas)
  • how to connect the model to our actual code and logic using function calling
]]>
https://www.angularspace.com/gemini-and-angular-part-ii-creating-generative-uis/6971f559b7771f00016ed2b6Thu, 22 Jan 2026 13:48:08 GMT

Let's continue our journey into LLMs and Gemini! In the previous article, we moved beyond simple text generation and learned:

  • how to force the model to speak our language using structured outputs (JSON schemas)
  • how to connect the model to our actual code and logic using function calling (tool use, foundation of AI agents)
  • how to build applications that can make decisions based on user input

In fact, we even already made our first steps in the world of Generative UI by rendering a dynamic form based on a generated schema.

Note: If you haven't read the previous article, but are confident with concepts like JSON schemas and tool calling, feel free to proceed. Otherwise, I'd suggest reading the previous one here

This time, let's take a significant step forward. We are going to go beyond simple forms and bring full-blown AI capabilities directly into the user interface.

Our goals

In the last article, we saw how useful it is to generate UI elements (like that form) on the fly. However, manual implementation of such dynamic rendering can get complicated quickly. We want rich, interactive interfaces generated in real-time without the boilerplate. We also want more capabilities in terms of visualizing data and actually improving user experience.

To do this, we will dive deep into Generative UI. We will explore how to combine the power of Gemini with Nano Banana Pro, the state-of-the-art image generation model by Google, to render Angular components dynamically based on the model's responses. We will do our best to write as little code as possible for the highest possible return. In the meantime, we can also use this as an opportunity to learn about Angular's newest feature: signal forms, whose highly dynamic nature plays incredibly well with Generative UI scenarios.

Note: throughout this series, we often use cost-effective models like Gemini Flash to keep things accessible for learners. For production-grade Generative UI, using capable models is crucial, so always balance performance with your budget.

Let's start!

Visualizing user preferences to help make decisions

Imagine we are developing a car dealership application. We examine the user journey and realize users spend a lot of time thinking about their future car's design and outwards look. We could, of course, write a complicated 3D rendering component and let users customize colors and shapes. While that would be a great approach, it will still come with some downsides:

  • it would take a lot of time and effort to build such a component
  • it would be too limited to predefined options and styles
  • still won't allow the user to see their potential car in different environments (will my offroad car look cool when I ride in the mountains?)

Instead, we can use Nano Banana Pro to generate images of the car exactly as the users themselves describe and want to see it. This way, we can provide a much richer experience with minimal effort. Before we proceed, however, let us also outline the pros and cons of this approach, so we can determine which one is more suitable for our use cases in the future.

Pros:

  • Flexibility: Users can describe any design they want, and the model can generate it.
  • Ease of Implementation: No need to build complex components to render 3D models
  • Rich Visuals: The model can generate high-quality images that can be more appealing than simple renderings.
  • Limits still apply: While the model can generate anything, it is nice that we can still constrain it to a specific domain (cars, only specific models and styles, and so on), making it easier to control outputs.

Cons:

  • Cost: Image generation models can be expensive to use, especially at scale, and especially with Nano Banana Pro, which is the frontier of image generation models as of today.
  • Latency: Generating images can take longer than rendering predefined components or 3D models, leading to potential delays in user experience.
  • Quality Control: The generated images may not always meet user expectations, leading to dissatisfaction

With this in mind, let us actually implement this feature in our Angular application, also applying signal forms!

Implementation

First, let us do the, funnily enough, easy part: asking the Gemini API to generate the image based on user's description. For this, we will add a simple Express.js endpoint to our backend.

Note: if you don't have an Express.js backend yet, you can follow the instructions from my first article of this series to get a simple Express backend up and running quickly.

Warning: Nano Banana Pro is available only on the paid tier of Gemini API access. To use it, you will need an API key tied to a Google Cloud Platform account with billing enabled (a valid credit card on file).


app.post('/car-image', async (req, res) => {
  const info = req.body;

  try {
    const response = await genAI.models.generateContent({
      // Nano Banana is the code name of the image model, but the actual model name is "gemini-3-pro-image-preview"
      model: "gemini-3-pro-image-preview",
      contents: "Generate a photo of a car that adheres to these specific parameters: " + JSON.stringify(info),
      config: {
        tools: [{ googleSearch: {} }],
        imageConfig: {
          aspectRatio: "16:9",
          imageSize: "4K" // possible values are 1K, 2K, and 4K
        },
      }
    });

    const inlineData = response.candidates?.[0]?.content?.parts?.find(p => p.inlineData)?.inlineData;
    const base64String = inlineData?.data;
    const mimeType = inlineData?.mimeType;

    if (!base64String || !mimeType) {
      return res.status(500).json({error: 'Failed to generate image'});
    }
    return res.json({image: `data:${mimeType};base64,${base64String}`});
  }  catch {
    res.status(500).json({error: 'Failed to process the request'});
  }
})

Note: the googleSearch tool is only available for Gemini 3 class models, so you won't be able to use it with earlier models like Gemini 2.5 Flash.

Let's do a quick breakdown of what is happening here:

  • We ask Gemini to use the image generation model (gemini-3-pro-image-preview)
  • We take the user's input, add it to a very simple prompt, and send it to Gemini
  • We specify some image configuration (aspect ratio and size). Careful: larger images are more costly and take longer to generate!
  • Finally, we extract the base64-encoded image from the response and send it back to the client

An important novelty here is the usage of the tools field in the config, particularly the googleSearch tool. We explored tool calling in the previous article, but here we are using it in a slightly different way. Instead of providing a tool we have in our own app, we instruct Gemini to use a built-in Google search tool, which allows the model to look up recent images and information on the web to improve the quality of the generated image. For instance, this way we won't have to pass a lot of information about what a given car model should look like - the model can simply look it up itself!
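The extraction step from the endpoint above can also be factored into a small pure helper over the response shape. This is a sketch under the assumption that we only care about the first candidate; extractImageDataUrl is an illustrative name, and GenerateContentLike models only the fields we read:

```typescript
// only the parts of the generateContent response this helper reads
type GenerateContentLike = {
  candidates?: Array<{
    content?: {
      parts?: Array<{ inlineData?: { data?: string; mimeType?: string } }>;
    };
  }>;
};

// find the first inlineData part and turn it into a data: URL, or null on failure
function extractImageDataUrl(response: GenerateContentLike): string | null {
  const inlineData = response.candidates?.[0]?.content?.parts
    ?.find(p => p.inlineData)?.inlineData;
  if (!inlineData?.data || !inlineData.mimeType) {
    return null;
  }
  return `data:${inlineData.mimeType};base64,${inlineData.data}`;
}
```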

This seems fairly straightforward! Now, let us also add a method in our GenAIService to call this endpoint from Angular:

type CarImageDetails = {
    color: string;
    background: string;
    cameraAngle: string;
    make: string;
    model: string;
    year: number;
}

const BASE_URL = 'http://localhost:3000';

@Injectable({providedIn: 'root'})
export class GenAIService {
    readonly #http = inject(HttpClient);

    // other methods omitted for brevity

    generateCarImage(details: CarImageDetails) {
        return this.#http.post<{image: string}>(`${BASE_URL}/car-image`, details);
    }
}

Now, let us move on to the implementation of the actual component's TS side logic. In order to do this, we will use rxResource for calling the image generation endpoint and managing the state of the generated image, and a simple signal form to capture user input.

@Component({/* */})
export class CarComponent {
    readonly #genAI = inject(GenAIService);
    imageDetails = signal({
        color: '',
        background: '',
        cameraAngle: '',
        make: '',
        model: '',
        year: 2000,
    });
    carMakers = ['Toyota', 'Ford', 'Honda', 'Chevrolet', 'BMW', 'Nissan', 'Tesla'] as const;
    carModelsRaw: Record<typeof this.carMakers[number], string[]> = {
        Toyota: ['Camry', 'Corolla', 'Prius'],
        Ford: ['F-150', 'Mustang', 'Explorer'],
        Honda: ['Civic', 'Accord', 'CR-V'],
        Chevrolet: ['Silverado', 'Malibu', 'Equinox'],
        BMW: ['3 Series', '5 Series', 'X5'],
        Nissan: ['Altima', 'Sentra', 'Rogue', 'Pathfinder'],
        Tesla: ['Model S', 'Model 3', 'Model X', 'Model Y'],
    };
    carModels = computed(() => {
        const make = this.imageDetails().make as typeof this.carMakers[number];
        return make ? this.carModelsRaw[make] : [];
    });
    
    form = form(this.imageDetails, path => {
        required(path.make);
        required(path.model);
        required(path.background);
    });
    generatedImage = rxResource({
        stream: () => this.#genAI.generateCarImage(this.imageDetails()),
        defaultValue: {image: ''},
    });

}

Here, we can see several quite amazing things going on that were only recently made possible in Angular thanks to signal forms and resources:

  • Car Model dropdown values are dynamically updated based on the selected Car Make using a computed signal
  • Form validation is declaratively defined using the required function in a signal form
  • The generated image state is managed using rxResource, which handles loading and error states automatically
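The make-to-model dependency behind that computed signal is plain data logic, sketched here outside Angular. This is illustrative only (modelsFor is my own name, and the lookup table is a subset); the component keeps this inside a computed so the dropdown updates reactively:

```typescript
// illustrative subset of the component's lookup table
const carModelsRaw: Record<string, string[]> = {
  Toyota: ['Camry', 'Corolla', 'Prius'],
  Tesla: ['Model S', 'Model 3', 'Model X', 'Model Y'],
};

// an empty make (nothing selected yet) or an unknown make yields no models,
// which the template uses to disable the model dropdown
function modelsFor(make: string): string[] {
  return make ? carModelsRaw[make] ?? [] : [];
}
```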

Finally, let us implement the template for this component, which should be relatively simple:

<form>
    <div class="control">
        <label for="color">Color:</label>
        <input id="color" type="color" [field]="form.color" />
    </div>
    <div class="control">
        <label for="background">Background:</label>
        <input id="background" type="text" [field]="form.background" />
    </div>
    <div class="control">
        <label for="cameraAngle">Camera Angle:</label>
        <input id="cameraAngle" type="range" min="0" max="360" [field]="form.cameraAngle" />
    </div>
    <div class="control"> 
        <label for="make">Car Make:</label>
        <select id="make" [field]="form.make">
            <option value="" disabled selected>Select a make</option>
            @for (maker of carMakers; track maker) {
                <option [value]="maker">{{ maker }}</option>
            }
        </select>   
    </div>
    <div class="control">
        <label for="model">Car Model:</label>
        <select id="model" [field]="form.model" [disabled]="carModels().length === 0">
            <option value="" disabled selected>Select a model</option>
            @for (model of carModels(); track model) {
                <option [value]="model">{{ model }}</option>
            }
        </select>   
    </div>
    <div class="control">
        <label for="year">Car Year:</label>
        <input id="year" type="number" min="1900" max="2024" [field]="form.year" />
    </div>
    <button type="button" (click)="generatedImage.reload()">Generate Car Image</button>
</form>

<figure>
    <figcaption>Generated Car Image:</figcaption>
    @if (generatedImage.isLoading()) {
        <div class="loader-backdrop">
            <div class="loader"></div>
        </div>
    }
    @if (generatedImage.error()) {
        <p>Error generating image: {{generatedImage.error()}}</p>
    } 
    @if (generatedImage.hasValue() && generatedImage.value().image) {
        <img [src]="generatedImage.value().image" alt="Generated Car Image" />
    }
</figure>

As we can see, nothing complex is happening here, since most of the logic is encapsulated inside the signal form and the resource. We simply bind the form controls to the signal form and display the generated image based on the resource's state.

Tip: if you're not fully familiar with Angular Resources, I recommend reading one of my past articles on the topic here. If you are not caught up with signal forms yet, check out this fantastic article from Manfred Steyer: All About Angular’s New Signal Forms, or take a look at two of my recent livestreams where I build with signal forms: Part 1 and Part 2. Alternatively, you can just read the official documentation here for a quick catch-up.

Now, before we move on, let's quickly see how this component worked out. I live in Armenia and drive a 2008 Nissan Pathfinder, often in the mountains, so let's try generating a hypothetical image of my car :)

Gemini and Angular, Part II: Creating Generative UIs

Wow, looks and works pretty well! Let us now move on to a more complex scenario.

Step-by-step UI generation

Let's think about a consumer scenario we all find ourselves in quite often: we have an everyday issue (the car won't start, the milk smells bad), so we open an LLM chat like Gemini and ask for help. We then get hit with a wall of text containing lots of steps as well as clarifying questions. We want to answer the questions to get a better response, but some of them are ambiguous, or we are unsure about the options. What we really want is a short list of concrete steps, so we keep prompting more and more, and might still end up with a disappointing result.

So, maybe let's solve this issue by creating a highly dynamic UI where the LLM can ask clarifying questions, which will be presented as form controls (with dropdown options when necessary!) that the user will fill in, and then be given the final steps as UI cards to solve their issue.

Sounds like a great case for structured outputs and good old prompting! Let's implement this.

Implementation

First, of course, we need to define a new Express.js endpoint that will handle our step-by-step UI generation. This one is more complex than the previous one, so we will go through it step by step. Let us start with the response we expect from Gemini. We will use structured outputs to define a schema that contains two main parts:

const schema = {
    "type": "object",
    "properties": {
        "steps": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "title": { "type": "string" },
                    "text": { "type": "string" }
                },
                "propertyOrdering": ["title", "text"],
                "required": ["title", "text"]
            }
        },
        "form": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "type": {
                        "type": "string",
                        "enum": ["text", "select", "number"]
                    },
                    "options": {
                        "type": "array",
                        "items": { "type": "string" }
                    },
                    "question": { "type": "string" }
                },
                "propertyOrdering": ["type", "options", "question"],
                "required": ["type", "question"]
            }
        }
    },
    "propertyOrdering": ["steps", "form"]
}

If this looks intimidating, it might be way easier to look at the TypeScript type that corresponds to this schema:

export type ControlFieldType = 'text' | 'select' | 'number';

export type ControlSchema = {
  type: ControlFieldType;
  options?: string[];
  question: string;
}

export type Step = {
  title: string;
  text: string;
}

type SchemaResponse = {
  steps?: Step[];
  form?: ControlSchema[];
}

As we can see, we either expect Gemini to instruct us to show controls with questions for the user if any new info is necessary, or to give us concrete steps to solve the issue if the user input is sufficient. So, here's what the endpoint will look like:


 app.post('/fix-it', async (req, res) => {
  const { query, additionalInfo } = req.body;
  
  const prompt = `The user will provide an issue they are facing in their day-to-day life. Your task is to find a solution and present it in actionable steps. If additional information is not provided, and knowing that additional information will help find the solution steps, return a list of items the user has to respond to. Those will be presented to the user as UI elements like dropdowns or inputs where they will input necessary information for you to provide a solution. User query: \n\n${query}, additional info ${additionalInfo ? JSON.stringify(additionalInfo) : 'not provided'}`;

  try {
    const response = await genAI.models.generateContent({
      model: "gemini-3-pro-preview",
      contents: prompt,
      config: {
        tools: [{ googleSearch: {} }],
        responseMimeType: 'application/json',
        responseJsonSchema: schema
      }
    });
  
    res.json(JSON.parse(response.candidates[0].content.parts[0].text));
  } catch {
    res.status(500).json({error: 'Failed to process the request'});
  }
});

As we can see, we leaned much harder into prompt engineering here, providing additional context when it is available and using the same Google Search tool to help the model find relevant information on the web when necessary. We also once again strictly required a structured JSON response based on our schema. Let's quickly add a new method to our GenAIService to call this endpoint:


const BASE_URL = 'http://localhost:3000';

@Injectable({providedIn: 'root'})
export class GenAIService {
    readonly #http = inject(HttpClient);

    // other methods omitted for brevity

    solveIssue(
        data: {query: string, additionalInfo?: Record<string, string>}
    ) {
        return this.#http.post<{form?: ControlSchema[], steps?: Step[]}>(`${BASE_URL}/fix-it`, data)
    }
}

Now, let's stop for a moment and think about what we did here: instead of two endpoints, one for clarifying additional information and one for actually answering the question, we created a single endpoint that can do both! While this keeps our codebase slightly cleaner, and may avoid scenarios where the model still asks for more information even when the user provided everything within their query, it also means we have to handle more complex logic on the client side. This approach is in no way "better" than the two-endpoint approach, so make sure to evaluate your use case and choose accordingly.
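Since the single endpoint can return either shape, the client's branching logic boils down to inspecting which property is present. A minimal sketch of that decision (types copied from above; the components we build next effectively do the same thing in their templates):

```typescript
// Types mirroring the structured-output schema defined earlier.
type Step = { title: string; text: string };
type ControlSchema = {
  type: 'text' | 'select' | 'number';
  options?: string[];
  question: string;
};
type SchemaResponse = { steps?: Step[]; form?: ControlSchema[] };

// Decide which child component to render based on the response shape.
function nextView(res: SchemaResponse): 'steps' | 'form' | 'empty' {
  if (res.steps?.length) return 'steps';
  if (res.form?.length) return 'form';
  return 'empty';
}
```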

Now, to have our actual UI, it would be best for us to split it into three components: one that receives the form schema and renders the necessary controls, one that receives the steps and displays them as cards, and one parent component that manages the state and orchestrates calls to the backend. Let's start with the form component:

@Component({
  selector: 'app-dynamic-form',
  standalone: true,
  imports: [Field],
  template: `
    @if (schema().length > 0) {

      <form class="dynamic-form">
        @for (control of schema(); track $index) {
          <div class="form-field">
            <label [for]="control.question" class="form-label">{{ control.question }}</label>
            
            @switch (control.type) {
              @case ('text') {
                <input 
                      [field]="$any(dynamicForm)[control.question]" 
                      [id]="control.question" 
                      type="text" 
                      class="form-input">
              }
              @case ('number') {
                <input 
                      [field]="$any(dynamicForm)[control.question]" 
                      [id]="control.question" 
                      type="number" 
                      class="form-input">
              }
              @case ('select') {
                <select 
                        [field]="$any(dynamicForm)[control.question]" 
                        [id]="control.question" 
                        class="form-select">
                        @for (opt of control.options; track $index) {
                          <option [value]="opt">
                            {{ opt }}
                          </option>
                        }
                </select>
              }
            }
          </div>
        }
        <button type="button" class="primary-btn" (click)="onSubmit()">Submit</button>
      </form>
    }
  `,
})
export class DynamicFormComponent {
  schema = input.required<ControlSchema[]>();
  submit = output<Record<string, string>>();

  formValue = linkedSignal(() => {
    const s = this.schema();
    const group: Record<string, any> = {};
    s.forEach(c => group[c.question] = '');
    return group;
  });

  dynamicForm = form(this.formValue);

  onSubmit() {
    const additionalInfo = this.dynamicForm().value()
    this.submit.emit(additionalInfo);
  }
}

Here, the interesting part is the linkedSignal: it is derived from the input schema, but unlike a computed it remains writable, so we can pass it straight to the signal form. The rest is fairly straightforward dynamic form rendering. Also, please note that we use $any a lot in this template, driven purely by the fully dynamic nature of this component (we have no idea in advance which controls the LLM will make us render here).
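The linkedSignal's computation can be read as a plain function: for every question in the schema, seed an empty string keyed by the question text. A framework-free restatement (the ControlSchema type is copied from earlier):

```typescript
type ControlSchema = {
  type: 'text' | 'select' | 'number';
  options?: string[];
  question: string;
};

// Seed one empty-string entry per question, keyed by the question text itself -
// the same shape the linkedSignal computes for the dynamic form.
function buildInitialValue(schema: ControlSchema[]): Record<string, string> {
  const group: Record<string, string> = {};
  for (const c of schema) {
    group[c.question] = '';
  }
  return group;
}
```

Keying by the question text is what lets us later bind each control via `$any(dynamicForm)[control.question]` without knowing the field names up front.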

After the user fills in the form, they click the button and we emit the filled-in values to the parent component. Next, let us implement the steps component:

@Component({
  selector: 'app-cards-stepper',
  standalone: true,
  template: `
    @if (steps().length > 0) {
      <div class="stepper-container">
        <button class="nav-arrow prev" (click)="prev()" [disabled]="currentIndex() === 0" aria-label="Previous step">
          <span aria-hidden="true">&lt;</span>
        </button>

        <div class="steps-wrapper">
          @if (hasPrevious()) {
            <div class="step-card previous" (click)="prev()">
              <div class="step-content">
                <h3 class="step-title">{{ steps()[currentIndex() - 1].title }}</h3>
                <p class="step-text">{{ steps()[currentIndex() - 1].text }}</p>
              </div>
            </div>
          }

          @if (steps()[currentIndex()]) {
            <div class="step-card current">
              <div class="step-content">
                <h3 class="step-title">{{ steps()[currentIndex()].title }}</h3>
                <p class="step-text">{{ steps()[currentIndex()].text }}</p>
              </div>
            </div>
          }

          @if (hasNext()) {
            <div class="step-card next" (click)="next()">
              <div class="step-content">
                <h3 class="step-title">{{ steps()[currentIndex() + 1].title }}</h3>
                <p class="step-text">{{ steps()[currentIndex() + 1].text }}</p>
              </div>
            </div>
          }
        </div>

        <button class="nav-arrow next" (click)="next()" [disabled]="currentIndex() === steps().length - 1" aria-label="Next step">
          <span aria-hidden="true">&gt;</span>
        </button>
      </div>
    }
  `,
})
export class CardsStepperComponent {
  steps = input<Step[]>([]);
  stepChange = output<number>();

  currentIndex = signal(0);

  hasPrevious = computed(() => this.currentIndex() > 0);
  hasNext = computed(() => this.currentIndex() < this.steps().length - 1);

  prev() {
    if (this.hasPrevious()) {
      this.currentIndex.update(i => i - 1);
      this.stepChange.emit(this.currentIndex());
    }
  }

  next() {
    if (this.hasNext()) {
      this.currentIndex.update(i => i + 1);
      this.stepChange.emit(this.currentIndex());
    }
  }
}

While this component might seem a bit complex, it is actually quite straightforward: we simply display the current step and, when available, the previous and next steps as cards. The user can navigate between steps using the arrows or by clicking on the cards themselves. Finally, let us implement the parent component that orchestrates everything; this is where the really interesting logic lives:

@Component({
    selector: 'app-fix-it',
    template: `
        <div class="fix-it-container">
            <h2 class="title">Fix your issue</h2>
            <div class="input-section">
                <label for="query" class="sr-only">Describe your issue</label>
                <textarea 
                    id="query" 
                    [field]="form.query" 
                    rows="4" 
                    class="form-input query-input"
                    placeholder="Describe your issue..."></textarea>
                <button type="button" class="primary-btn" (click)="result.reload()">Get Fixes</button>
            </div>

            @if (result.isLoading()) {
                <div class="loading-backdrop">
                    <div class="loader"></div>
                </div>
            }

            @if (result.hasValue() && result.value()) {
                <div class="results-section">
                    @if (result.value().form) {
                        <app-dynamic-form
                            [schema]="result.value().form!"
                            (submit)="resubmit($event)"
                            />
                    }

                    @if (result.value().steps) {
                        <app-cards-stepper [steps]="result.value().steps!"></app-cards-stepper>
                    }
                </div>
            }
        </div>
    `,
    imports: [DynamicFormComponent, CardsStepperComponent, Field]
})
export class FixItComponent {
    readonly #genAI = inject(GenAIService);
    controls = signal<{
        query: string, additionalInfo?: Record<string, string>
    }>({query: ''});

    form = form(this.controls, path => {
        required(path.query);
    });

    result = rxResource({
        stream: () => {
            const {query, additionalInfo} = this.form().value();

            if (this.form().invalid()) {
                return of(undefined)
            }

            return this.#genAI.solveIssue({query, additionalInfo});
        }
    });

    resubmit(additionalInfo: Record<string, string>) {
        this.controls.update(c => ({...c, additionalInfo}));
        this.result.reload();
    }
}

Here we have a simple form with one input for the user's query; submitting it triggers a resource reload with our Gemini API call. We might get either the steps or a form schema in response, and we render the corresponding component accordingly. If we get a form schema, we also pass a handler to capture the submitted additional info and trigger another reload with the new info. Angular signals, forms, and resources make this incredibly easy to implement! So, as with the previous example, here is how this component works in practice:

Gemini and Angular, Part II: Creating Generative UIs

And we are done!

Conclusion

In this article, we deepened our understanding of Generative UI by combining Gemini's text generation capabilities with Nano Banana Pro's image generation power, meaning we have officially taken our first steps into multimodality. Creating generative UIs is part and parcel of building AI-powered applications, and in my opinion, GenUIs will become more and more prominent and widespread as the entire field progresses.

In the next article, we will explore embeddings - a powerful concept that allows us to build so much more than just generative experiences, but rather incorporate semantic searches, recommendations, and knowledge-based applications like RAGs (retrieval-augmented generation). Stay tuned!

Small Promotion

My book, Modern Angular, is now in print! I spent a lot of time writing about every single new Angular feature from v12-v18, including enhanced dependency injection, RxJS interop, Signals, SSR, Zoneless, and way more.

If you work with a legacy project, I believe my book will be useful to you in catching up with everything new and exciting that our favorite framework has to offer. Check it out here: https://www.manning.com/books/modern-angular

P.S. There is one chapter in my book that helps you work with LLMs in the context of Angular apps; that chapter is already somewhat outdated, despite the book being published just earlier this year (see how insanely fast-paced the AI landscape is?!). I hope you can forgive me ;)

]]>
<![CDATA[Signal Forms]]>Experimental - This is intended for use in non-production applications, as the API can change (without notice) as it did with 21.0.0-next.8 where Control was renamed to Field.

Signal Forms is one of the most talked-about features to be added to the Angular framework, and we recently

]]>
https://www.angularspace.com/signal-forms/690867f563641b00017b0291Mon, 19 Jan 2026 12:39:14 GMT

Experimental - This is intended for use in non-production applications, as the API can change (without notice) as it did with 21.0.0-next.8 where Control was renamed to Field.

Signal Forms is one of the most talked-about features to be added to the Angular framework, and we got our first look at an experimental version of it in the recent beta release of Angular version 21. Signal Forms will be a game-changer when it comes to developing forms. Not only does it simplify form creation by removing a significant amount of the boilerplate code that Template and Reactive Forms require, but it also makes form validation, form submission and creating custom controls significantly easier.

Signal Forms are model-driven: we create a signal model and then pass it to the form function as an argument.

protected readonly userProfile = signal<UserProfile>({
    // model properties
})

For example, when creating custom controls to use in our forms, we've previously had to implement the ControlValueAccessor interface to let them integrate with the Forms or Reactive Forms module. The good news is that this has been greatly simplified as well, along with a few other issues we faced when creating custom controls, which we'll look at in this post.

Let's walk through setting up a Signal Form, we start off by creating our model.

type UserProfile = {
    firstName:string;
    lastName:string;
    phone:string;
    email:string;
}

Next, let's build our form: we create a signal of our model and then use the new form function to create a wrapper around it.

export class User {
    protected readonly userProfile = signal<UserProfile>({
    	firstName:'',
        lastName:'',
    	phone:'',
    	email:'',       
    })
    
    protected readonly userForm = form(this.userProfile);
}

That's all we need to create a Signal Form - how easy was that compared to Template or Reactive forms?

Signal Forms are represented as a FieldTree of FieldState nodes - a hierarchical structure of our form that looks like this.

// our model
type UserProfile = {
    firstName:string;
    lastName:string;
    phone:string;
    email:string;
}

// user (FieldTree root)
//  ├─ firstName (FieldState)
//  ├─ lastName (FieldState)
//  ├─ phone (FieldState)
//  └─ email (FieldState)

If we had a nested model it would be represented like this:

// our model
type userProfile = {
  firstName: string;
  lastName: string;
  phone: string;
  email: string;
  address: {
    street: string;
    city: string;
  }
}

// user (FieldTree root)
//  ├─ firstName (FieldState)
//  ├─ lastName (FieldState)
//  ├─ phone (FieldState)
//  ├─ email (FieldState)
//  └─ address (FieldTree node)
//       ├─ street (FieldState)
//       └─ city (FieldState)

The main difference between Signal Forms and Template or Reactive forms is that Signal Forms don't maintain a copy of the data, so when we update a FieldState in the tree, we are directly mutating the original model.

A FieldState represents an individual form field, including its state (value, validity, dirty status, etc.).

Now, let's have a look at connecting our input components to our new form. I'm using Angular Material in these examples.

<mat-form-field>
      <mat-label for="firstName">First name</mat-label>
      <input
        [field]="userForm.firstName"
        id="firstName"
        matInput
        type="text"
        placeholder="First name"
      />
</mat-form-field>

To connect our model and template, we use the new [field] binding, passing in the field we want this input to be bound to. This is nice and simple, and we get two-way data binding out of the box.

In version 21.0.0-next.8 [control] was renamed to [field].

// for reference - the array for the dropdown to iterate over
// address: [] = [
//     { value: '0', viewValue: 'Primary' },
//     { value: '1', viewValue: 'Billing' },
//     { value: '2', viewValue: 'Shipping' },
//   ];

<mat-form-field>
    <mat-label for="address">Address</mat-label>
    <mat-select id="address" [field]="userForm.address">
      @for (addr of address; track addr.value) {
      <mat-option [value]="addr.value">
          {{addr.viewValue}}
      </mat-option>
      }
    </mat-select>
</mat-form-field>

Setting up a drop-down list is just as easy.

Let's now look at validation. The form function takes a second parameter, which can be a schema, a function, or form options (if a schema is passed as the second argument, the form options can be passed as a third argument).

// recap what our model looks like
type UserProfile = {
    firstName:string;
    lastName:string;
    phone:string;
    email:string;
}

protected readonly userForm = form(this.userProfile, (path)=>{
    required(path.firstName),
    required(path.lastName),
    email(path.email)
});

This second parameter is a function that takes a field path as an argument; inside this function we set up our validation.

The ordering in which validation is applied doesn't matter.

The built-in validators are now imported from @angular/forms/signals, and we have a similar list to what we have in Template or Reactive Forms:

  • Email
  • Max
  • MaxLength
  • Min
  • MinLength
  • Pattern
  • Required

With the validation, we set the path to the fieldState we want the validation applied to. In the HTML we just need to iterate over the errors object.

 <mat-form-field>
      <mat-label for="email">Email address</mat-label>
      <input
        id="email"
        type="email"
        matInput
        [field]="profileForm.email"
        placeholder="Email"
        required
      />

      @if(profileForm.email().errors().length > 0) {
      <mat-error>
        @for(error of profileForm.email().errors(); track error) {
        <div>
        	Error message goes here...    
        </div>
        }
     </mat-error>
    }
 </mat-form-field>

Adding error messages like this could get a bit long with multiple error messages, so to help with this, we can add a message to the form like this:

// recap what our model looks like
type UserProfile = {
    firstName:string;
    lastName:string;
    phone:string;
    email:string;
}

protected readonly userForm = form(this.userProfile, (path)=>{
    required(path.firstName, {message: 'This is a required field.'}),
    required(path.lastName, {message: 'This is a required field.'}),
    email(path.email, {message: 'The email address is not valid.'})
});

And in the HTML we can read the error.message like this:

@if(profileForm.email().errors().length > 0) {
   <mat-error>
        @for(error of profileForm.email().errors(); track error) {
           <span>{{ error.message }}</span>
        }
   </mat-error>
}

Because I'm using Angular Material (and this might just be because Signal Forms is still experimental at present), I had to wrap an @if block around the mat-error to get it to display the error message correctly under the input. Hopefully this will not be the case in later releases, but it is necessary at the moment.

I don't like repeating code unnecessarily, and the validation we have just created repeats (albeit just twice in our example). But imagine if you have three, six, nine, or more controls; then that message is going to be repeated a lot. Luckily, there is another approach we can use to remove the duplication: we create a schema, which can then be applied to our form. Let's adjust our code and create a schema.

const profileSchema: Schema<string> = schema((path) => {
  required(path, { message: 'This is a required field.' });
  minLength(path, 3, { message: 'This needs to be more than three characters'});
});
protected readonly userForm = form(this.userProfile, (path)=>{
   apply(path.firstName, profileSchema);
   apply(path.lastName, profileSchema );
   email(path.email, {message: 'The email address is not valid.'})
});

We use the apply function to apply our schema to the fields we want to validate. This applies both required and minLength to firstName and lastName, and we can create multiple schemas as necessary, for example if we only want minLength validation on certain controls.

Custom validation

We can also create custom validators. Let's create one that ensures the phone number control contains numbers only. We need to create a function that takes a path and an optional options object.

export function numericOnly(
  path: FieldPath<string>,
  options?: { message?: string }
): void {

  validate(path, (ctx) => {
    const value = ctx.value();

    if (!/^\d+$/.test(String(value))) {
      return customError({
        kind: 'phone',
        message: options?.message || 'Phone must contain only numbers.',
      });
    }
    // no error - the value is valid
    return undefined;
  });
}

protected readonly userForm = form(this.userProfile, (path)=>{
   // other validators
   numericOnly(path.phone);
});
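The heart of numericOnly is its regular expression. Extracted as a plain predicate (just for illustration, not part of the validator's API), it behaves like this:

```typescript
// The same check the validator performs: the whole value must consist of digits.
const isNumericOnly = (value: unknown): boolean => /^\d+$/.test(String(value));
```

Note that an empty string fails the check, so you may still want a separate required validator with its own message rather than relying on numericOnly alone.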

Conditional validation

Signal Forms has you covered there as well. Let's adjust our model and add an emailMarketing flag, so that the validation for email is only applied if the emailMarketing checkbox is ticked (true).

type UserProfile = {
    firstName:string;
    lastName:string;
    phone:string;
    email:string;
    emailMarketing: boolean;
}

protected readonly userForm = form(this.userProfile, (path)=>{
   required(path.email, { 
       when: ({ valueOf }) => valueOf(path.emailMarketing) === true, 
       message: 'This is a required field.',
   });
   email(path.email, {message: 'The email address is not valid.'})
});

We add a required validator with a path to email. In the configuration options there is a when property, and we can use it to check the valueOf another field; in our case we want to apply the validation when the emailMarketing checkbox is ticked.

If you have applied a required validation schema you will need to remove this as it will also be applied.

Form Submission

When it comes to submitting our form to the server, we have a new function called... (you've guessed it) submit. This function takes two arguments: the first is our form, and the second is a function that returns a promise resolving to undefined if the save to our back end is successful. If it's not successful, we return an array of error objects. Within each object we set the kind of error (here we specify server); if we want to attach the error to a specific control, we specify the field; and finally, we set the error message to be displayed.

 onSubmit() {
    submit(this.userForm, async (form) => {
      try {
        await this.userProfileService.saveForm(form); // call to API to save our form data
        this.userForm().reset();
        return undefined;
      } catch (error) {
        return [
          {
            kind: 'server',
            field: this.userForm.firstName,
            message: (error as Error).message,
          },
        ];
      }
    });
  }

Calling .reset() on the form only resets the pristine, dirty and touched states; to reset the form values after submitting, reset the model values.

When submitting a form, we usually disable the save button while the operation takes place. The form's root state exposes a submitting() signal that we can use to disable our save button.

<button
  matFab
  extended
  class="toggle-btn"
  type="button"
  [disabled]="!profileForm().valid() || profileForm().submitting()"
  (click)="onSubmit()">
  Save
</button>

In Reactive forms we would set up our form like <form (ngSubmit)="submit($event)"> ... </form>; at present this isn't fully fleshed out in Signal Forms, but there is some information on the road-map about it and possible solutions.

Custom controls

When creating custom controls, we no longer need to implement the ControlValueAccessor interface; we now have a simpler new interface called FormValueControl<>. The good news is that we only need to implement one property, not four methods as before: our component must expose a value property (and it must be called value), which must be a model(). Let's create an example:

// our component 
import { Component, input, model } from '@angular/core';
import { FormValueControl } from '@angular/forms/signals';
import { MatIconModule } from '@angular/material/icon';

@Component({
  selector: 'star-rating',
  imports: [MatIconModule],
  template: `
  @if(required()){
  	<span class="required-asterisk">*</span>
  }
    <div class="star-rating">
      @for (star of stars; track $index) {
        <mat-icon 
          class="star"
          [class.filled]="star <= value()"
          (click)="setRating(star)"
          (mouseenter)="!disabled() && (hoverRating = star)"
          (mouseleave)="hoverRating = 0">
          {{ (hoverRating >= star || value() >= star) ? 'star' : 'star_border' }}
        </mat-icon>
      }
    </div>
  `,
})
export class StarRatingComponent implements FormValueControl<number> {
  value = model(0);
  disabled = input(false);
  required = input(false);
    
  stars = [1, 2, 3, 4, 5];
  hoverRating = 0;
  
  setRating(rating: number) {
    if (!this.disabled()) {
      this.value.set(rating);
    }
  }
}

With the new FormValueControl interface, we just need to include the value property in our component; it also has to be a model signal, and that's all we need to set. If we look at the FormValueControl interface, we can see that it extends FormUiControl, which provides many optional properties, for instance required and disabled. For our component to make use of these, we just need to include them in our component, and in the parent component we just need to set the states in the form.
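
To see why this is simpler, here is a framework-free TypeScript sketch contrasting the old four-method contract with the single value property. The createModel helper below is a stand-in for Angular's model() written for illustration only, not a real API:

```typescript
// Hypothetical stand-in for Angular's model() signal, for illustration only.
interface ModelSignal<T> {
  (): T;               // read the current value
  set(value: T): void; // write a new value
}

function createModel<T>(initial: T): ModelSignal<T> {
  let current = initial;
  const read = (() => current) as ModelSignal<T>;
  read.set = (value: T) => { current = value; };
  return read;
}

// The old contract: four methods to implement.
interface ControlValueAccessorLike<T> {
  writeValue(value: T): void;
  registerOnChange(fn: (value: T) => void): void;
  registerOnTouched(fn: () => void): void;
  setDisabledState?(isDisabled: boolean): void;
}

// The new contract: a single `value` model.
interface FormValueControlLike<T> {
  value: ModelSignal<T>;
}

// A star-rating control only needs the one property.
class StarRating implements FormValueControlLike<number> {
  value = createModel(0);
  setRating(rating: number) { this.value.set(rating); }
}
```

The point of the sketch is the shape of the two interfaces: everything the four ControlValueAccessor methods wired up by hand now flows through the one two-way value model.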

<star-rating [control]="productForm.starRating" />
type ProductProfile = {
    name: string;
    description: string;
    price: number;
    starRating: number;
    leaveReview: boolean;
}

protected readonly productProfile = signal<ProductProfile>({
    name: '',
    description: '',
    price: 0,
    starRating: 0,
    leaveReview: false,
});

protected readonly productForm = form(this.productProfile, (path) => {
    required(path.starRating);
    disabled(path.starRating, ({valueOf}) => valueOf(path.leaveReview) === false);
});

This is all we need for the required and disabled properties of our form to work as we'd expect: the starRating field stays in a disabled state until leaveReview is set to true.

Conclusion

Signal Forms is going to be a game-changer for creating forms in Angular applications when it's released. Hopefully, this post has given some insight into how to use it. I've been really impressed by how complete it is (even though it's currently experimental), and over the next few weeks and months, this will only get better.

Signal Forms ]]>
<![CDATA[Gemini and Angular, Part II: Structured Outputs and Tool calls]]>Let's continue our journey into LLMs and Gemini! In the previous article, we learned

  • how LLMs generate text, what tokens are, and what configuration parameters like temperature, topP and so on mean
  • how to create a Google Cloud project, get a Gemini API key, and use the SDK to
]]>
https://www.angularspace.com/gemini-and-angular-part-ii-structured-outputs-and-tool-calls/6901bb5163641b00017b0265Tue, 04 Nov 2025 13:42:00 GMT

Let's continue our journey into LLMs and Gemini! In the previous article, we learned

  • how LLMs generate text, what are tokens, what configuration parameters like temperature, topP and so on mean
  • how to create a Google Cloud project, get a Gemini API key, and use the SDK to make text generation requests
  • how to leverage the different models for different tasks

Note: If you haven't read the previous article, but are confident in your knowledge of the topics mentioned above, feel free to proceed with this one. Otherwise, I'd suggest reading the previous one here first.

This time, let's move forward and build even more complicated things, while still requiring only marginally more knowledge outside of what a typical Angular developer will possess.

Our goals

After being able to actually generate text in response to a prompt, we might be tempted to jump on and try to "build a chatbot". However, as exciting as it is, one of our goals with these articles will be to show how much more LLMs are actually capable of, rather than simply being "engines for chatbots".

To do this, we will, in depth, learn about two important concepts: structured outputs and function calling (also known as tool use). With these tools, we can build applications that can make AI-powered decisions affecting the UI and UX of our applications, and lay the foundation for our future explorations of a powerful frontend + LLM approach known as generative UI.

Note: throughout this and other articles of this series, slightly older models like Gemini 2.0 Flash Lite are used, with the sole goal of reducing costs for readers/learners. For better results it is recommended to use the latest models available in the Gemini family, while, of course, taking the cost implications into consideration.

Let's start!

Structured outputs

As always, I believe it is best to start with a task at hand, and then see how the tools we are going to explore will fit into solving that task. Let's build a writing assistant, that will help writers come up with better wording for certain paragraphs.

First of all, let us build a simple Angular component, which will present the user with two textarea inputs, and a button. The user will input two versions of the same paragraph, and the AI will help them pick the better one. When the user clicks the button, we will send both paragraphs to the Gemini API, and ask it to provide an overview of which one is better and why, then display that overview in a separate box.

@Component({
  selector: 'app-writing-assistant',
  template: `
    <div>
      <h2>Writing Assistant</h2>
      <textarea [(ngModel)]="paragraph1" placeholder="Enter first paragraph"></textarea>
      <textarea [(ngModel)]="paragraph2" placeholder="Enter second paragraph"></textarea>
      <button (click)="overview.reload()">Compare</button>
      @if (overview.hasValue()) {
          <div>
            <h3>Comparison Result:</h3>
            <p>{{ overview.value() }}</p>
          </div>
      }
    </div>
  `,
  imports: [FormsModule],
})
export class WritingAssistantComponent {
  readonly #genAI = inject(GenAIService);
  paragraph1 = signal('');
  paragraph2 = signal('');
  overview = rxResource({
    stream: () => {
      if (!this.paragraph1() || !this.paragraph2()) {
        return of('');
      }
      return this.#genAI.writingOverview(this.paragraph1(), this.paragraph2());
    },
  });
}

As we can see, a new method named writingOverview is being called on the GenAIService. To implement it, we do not actually need a new endpoint on our simple backend, since we already implemented a generic /generate-text endpoint. All we need to do is to call the method in the service providing the two paragraphs and a prompt.

writingOverview(paragraph1: string, paragraph2: string) {
    const prompt = `Provide a brief overview comparing the following two paragraphs:\n\nParagraph 1: ${paragraph1}\n\nParagraph 2: ${paragraph2}. Decide which version is better\n\nOverview:`;
    return this.generateContent(prompt);
}

Now, in the two inputs, we can put "Hello, how are you?" and "hlo hwo aer yo" to test a very clear-cut example, and then, in our overview box, we can see something like the following:

**Overview:** Paragraph 1 is a standard, grammatically correct greeting. Paragraph 2 is a heavily abbreviated and misspelled version of the same greeting. **Decision:** Paragraph 1 is significantly better due to its clarity, proper grammar, and readability. Paragraph 2 is difficult to understand and would be considered unprofessional in most contexts.

This is absolutely great! But what if, instead of just showing the response text, we wanted to actually change our UI in accordance with the response? For example, from the UX perspective it would be nice if we could highlight the better paragraph in green, and the worse one in red. But how do we achieve it? From the text we got, it is pretty obvious that paragraph 1 is the better choice, but how do we translate that into code? Here's where structured outputs come into play.

It would surely be great if instead of plain text, Gemini returned a JSON response of, for example, this form:

{
  "betterParagraph": 1,
  "reason": "Paragraph 1 is significantly better due to its clarity, proper grammar, and readability. Paragraph 2 is difficult to understand and would be considered unprofessional in most contexts."
}

But how do we make it work like this? Surely, we can write our prompt in a way that asks for a JSON response. However, we might quickly become quite frustrated, since the model will often start the message with "parasite" words like "Sure, here's your JSON:", or it will simply return a plain text response, or it will return malformed JSON. Fighting this with prompt engineering might yield some results, but it is not worth the time, and might not be fully reliable anyway.
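
To illustrate the failure mode, here is a minimal defensive parser — a hypothetical helper, not part of any SDK — that strips markdown fences and leading chatter before attempting JSON.parse. It is exactly the kind of brittle workaround that structured outputs make unnecessary:

```typescript
// Hypothetical fallback parser for "almost JSON" LLM replies.
// Structured outputs remove the need for this kind of scrubbing entirely.
function tryParseLlmJson(raw: string): unknown | null {
  // Strip ```json ... ``` fences the model sometimes adds.
  const unfenced = raw.replace(/```(?:json)?/g, '').trim();
  // Cut everything outside the outermost braces to drop
  // "Sure, here's your JSON:" chatter around the payload.
  const start = unfenced.indexOf('{');
  const end = unfenced.lastIndexOf('}');
  if (start === -1 || end === -1 || end < start) return null;
  try {
    return JSON.parse(unfenced.slice(start, end + 1));
  } catch {
    return null; // still malformed: give up rather than guess
  }
}
```

Even this helper fails on nested chatter or truncated braces, which is why a schema-level guarantee from the API is the better tool.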

Instead, we can use a feature of the Gemini API called structured outputs. It allows us to define a schema for the response we expect, and the model will guarantee that the response will be in the correct format! To do that, we need to do the following:

  • define a new endpoint to call Gemini
  • provide a response type ("application/json" for our use case)
  • provide the schema (what fields we expect, and what types they are)

Here's a simple implementation:

app.post('/writing-assistant', async (req, res) => {
  const { paragraph1, paragraph2 } = req.body;
  const schema = {
    "type": "object",
    "properties": {
      "overview": { "type": "string" },
      "bestChoice": {
        "type": "string",
        "enum": ["1", "2"]
      }
    },
    "required": ["overview", "bestChoice"]
  };
  const prompt = `Provide a brief overview comparing the following two paragraphs:\n\nParagraph 1: ${paragraph1}\n\nParagraph 2: ${paragraph2}. Decide which version is better\n\nOverview:`;
  try {
    const response = await genAI.models.generateContent({
      model: 'gemini-2.0-flash-lite',
      contents: prompt,
      config: {
        responseMimeType: 'application/json',
        temperature: 0.1,
        responseSchema: schema,
      },
    });
    res.json(JSON.parse(response.candidates[0]?.content.parts[0].text ?? '{}'));
  } catch (error) {
    console.error('Error generating writing overview:', error);
    res.status(500).json({ error: 'Failed to generate writing overview' });
  }
});

As we can see, most of it is the same boilerplate as previously, with two slight differences:

  1. We defined a schema object, which describes the structure of the response we expect
  2. In the config object, we provided two new properties: responseMimeType, which we set to "application/json", and responseSchema, which we set to the schema we defined.

The schema object is pretty straightforward. It is a standard JSON schema, which you can learn more about here. In our case, we expect an object with two properties: overview, which is a string, and bestChoice, which is also a string, but can only be one of two values: "1" or "2". Both properties are required.
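
Because the schema guarantees the shape, the frontend can rely on it, but a cheap runtime check is still useful at the trust boundary. Here is a hand-rolled type guard for the response shape we defined — an illustrative sketch, not something generated from the schema:

```typescript
// The shape we asked Gemini for via responseSchema.
interface WritingOverviewResponse {
  overview: string;
  bestChoice: '1' | '2';
}

// Narrow an unknown payload to the schema shape before using it in the UI.
function isWritingOverviewResponse(value: unknown): value is WritingOverviewResponse {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v['overview'] === 'string' &&
    (v['bestChoice'] === '1' || v['bestChoice'] === '2');
}
```

A guard like this keeps a mis-deployed backend or a schema drift from silently breaking the template bindings.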

Now, we can modify our Angular service to call this new endpoint:

writingOverview(paragraph1: string, paragraph2: string) {
    return this.#http.post<
    GeminiResponse
    >('http://localhost:3000/writing-assistant', {paragraph1, paragraph2});
}

As we can see, the only difference is that instead of simply extracting the text from the response, we parse it as JSON before returning from the backend. This is of course made possible by the structured output we used to instruct Gemini on how to generate the response.

Finally, we can modify our component to use the structured response to change the UI:

@Component({
  selector: 'app-writing-assistant',
  template: `
    <div>
      <h2>Writing Assistant</h2>
       @let value = overview.value();
      <textarea 
        [(ngModel)]="paragraph1" placeholder="Enter first paragraph" 
        [class.better]="value?.bestChoice === '1'" 
        [class.worse]="value?.bestChoice === '2'"></textarea>
      <textarea 
        [(ngModel)]="paragraph2" placeholder="Enter second paragraph" 
        [class.better]="value?.bestChoice === '2'" 
        [class.worse]="value?.bestChoice === '1'"></textarea>
      <button (click)="overview.reload()">Compare</button>
      @if (value?.overview) {
          <div>
            <h3>Comparison Result:</h3>
            <p>{{ value.overview }}</p>
          </div>
      }
    </div>
  `,
  styles: `
    .better {
      border: 2px solid green;
    }
    .worse {
      border: 2px solid red;
    }
  `,
  imports: [FormsModule],
})
export class WritingAssistantComponent {
  /* rest of the component code remains unchanged */
}

Note: if you're curious about how LLMs are capable of generating structured outputs so precisely, watch this YouTube video for a very detailed explanation; however, do not worry if you do not understand all the nuances, it is not required to know how structured outputs work to effectively use them

Now we do not only display the overview, but also highlight the better paragraph in green, and the worse one in red! Absolutely amazing and what an intro to both structured outputs and generative UIs!

Of course, this example was quite simplistic. Let's drive the point about the usefulness of structured outputs even further, since we can achieve quite spectacular things with LLMs that produce coherent responses and decisions. Next, let's try to build a dynamic form.

Advanced features with structured outputs

Have you ever created a custom Google form, perhaps to collect some feedback, or maybe acquire information about potential participants of an event? If you did, the next part is going to be familiar, yet exciting. We are going to build an interface which allows the user to add questions, provide a format for answers (for simplicity we will have "input", "textarea", and "dropdown" with options, but this can easily be expanded), and also see a live preview of the form - not simply an image, but an actual form the creator can play around with and edit.

To further make this interesting, we will use Angular signal forms, which are poised to enter the scene in v21, and make it extremely simple to create a dynamic form. Finally, our goal is to simplify the process of creating a form by allowing the user to input a description of what they want the form to be about, and then have Gemini generate the questions for them! So, for instance, they might type something like "I want to create a feedback form for my new product", and Gemini will generate a set of questions that would be appropriate for such a form, and we will display the form as it is directly on the same page.

Note: if you are unfamiliar with Angular signal forms, I suggest you read this article by Manfred Steyer first, or watch me live code with signal forms here.

To achieve this, we will need to do the following:

  • a data type that describes what a custom form looks like
  • a prompt to generate a form
  • a schema based on the data type that will be used for structured output
  • a linked signal derived from the structured output of Gemini
  • a form created from that linked signal

Let's do it step by step.

First, let's define a simple data type that describes what a custom form looks like. Since we can have multiple fields, it is reasonable to think of every field descriptor as an object, and the entire form as an array of such objects. Here's what our object might look like:

export interface FormField {
  name: string;
  type: 'input' | 'textarea' | 'dropdown';
  options?: {value: string, label: string}[]; // only for dropdown
}
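
As a quick sanity check, here is what a concrete blueprint of this type might look like, along with a small helper (illustrative only, not part of the app) encoding the rule that options is only meaningful for dropdown fields:

```typescript
interface FormField {
  name: string;
  type: 'input' | 'textarea' | 'dropdown';
  options?: { value: string; label: string }[]; // only for dropdown
}

// Example blueprint the LLM might produce for a feedback form.
const sampleBlueprint: FormField[] = [
  { name: 'email', type: 'input' },
  { name: 'comments', type: 'textarea' },
  {
    name: 'rating',
    type: 'dropdown',
    options: [
      { value: '1', label: 'Poor' },
      { value: '5', label: 'Great' },
    ],
  },
];

// A dropdown without options cannot be rendered, so flag it early.
function isRenderable(field: FormField): boolean {
  return field.type !== 'dropdown' || (field.options?.length ?? 0) > 0;
}
```
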

Very good, now, we need to define a schema for this object. We remember how to do it from the previous example, however, it can be quite cumbersome, and we might inadvertently make some mistakes. So, instead, what we are going to do is head over to Google AI Studio, toggle "Structured Output" on, and click "Edit". In the open popup, we can click on "Visual Editor" and start adding our properties and defining values! We can add properties, define their types, add enum values for strings, and more. Here's what our schema will look like visually:

Gemini and Angular, Part II: Structured Outputs and Tool calls

As we can see, all the fields are defined, so what is left is to switch to "Code Editor" and simply copy paste the generated schema into our backend code:

app.post('/generate-form', async (req, res) => {
  const { prompt: query } = req.body;
  if (!query) {
    return res.status(400).json({ error: 'Prompt is required' });
  }
  const prompt = `User will provide a description of a generic form, and you will generate the blueprint. Include only the fields required to add data, do not include buttons. ${query}`;
  const schema = {
  "type": "object",
  "properties": {
    "form": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": {
            "type": "string"
          },
          "type": {
            "type": "string",
            "enum": [
              "input",
              "textarea",
              "dropdown"
            ]
          },
          "options": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "value": {
                  "type": "string"
                },
                "label": {
                  "type": "string"
                }
              },
              "propertyOrdering": [
                "value",
                "label"
              ],
              "required": [
                "value",
                "label"
              ]
            }
          }
        },
        "propertyOrdering": [
          "name",
          "type",
          "options"
        ],
        "required": [
          "name",
          "type"
        ]
      }
    }
  },
  "propertyOrdering": [
    "form"
  ],
  "required": [
    "form"
  ]
};
  try {
    const response = await genAI.models.generateContent({
      model: 'gemini-2.0-flash-lite',
      contents: [
        { role: 'user', parts: [{ text: prompt }] }
      ],
      config: {
        responseMimeType: 'application/json',
        temperature: 0.1,
        responseSchema: schema, 
      }
    });
    res.json(JSON.parse(response.text ?? '{}'));
  } catch (error) {
    console.error('Error generating form blueprint:', error);
    res.status(500).json({ error: 'Failed to generate form blueprint' });
  }
});

It's quite obvious that while it's a bit of a long function, the only two differences from the previous example are the prompt and the schema. The rest is the same boilerplate, which is great news, since it means we can now build a component using this endpoint.

@Component({
    selector: 'app-generative-form',
    template: `
        <h2>Generative Form Component</h2>
        <textarea 
            #promptArea (keyup.enter)="prompt.set(promptArea.value)">
        </textarea>
        <button (click)="prompt.set(promptArea.value)">Generate</button>
        @if (formBlueprint.hasValue()) {
            <h3>Generated Form Blueprint:</h3>
            @for (field of formBlueprint.value(); track field.name) {
                <div>
                    <label [for]="field.name">{{field.name | titlecase }}</label>
                    @switch (field.type) {
                        @case ('input') {
                            <input [id]="field.name" [control]="$any(form)[field.name]" />
                        }
                        @case ('textarea') {
                            <textarea [id]="field.name" [control]="$any(form)[field.name]"></textarea>
                        }
                        @case('dropdown') {
                            <select [id]="field.name" [control]="$any(form)[field.name]">
                                @for(option of field.options; track option.value) {
                                    <option [value]="option.value">{{ option.label }}</option>
                                }
                            </select>
                        }
                        @default {
                            <div>Unknown field type: {{field.type}}</div>
                        }
                    }
                </div>
            }
            {{formValue() | json}}
        }
    `,
    imports: [Control, TitleCasePipe, JsonPipe],
})
export class GenerativeFormComponent {
    readonly #genAI = inject(GenAIService);
    prompt = signal('');
    formBlueprint = rxResource({
        params: () => ({prompt: this.prompt()}),
        stream: ({params}) => {
            if (!params.prompt) {
                return of(null);
            }
            return this.#genAI.generateForm(params.prompt);
        },
    });

    formValue = linkedSignal(() => {
        const blueprint = this.formBlueprint.value();
        if (!blueprint) {
            return null;
        }
        const value = {} as Record<string, any>;
        for (const field of blueprint) {
            value[field.name] = '';
        }
        return value;
    });

    form = form(this.formValue);
}

Now, let's carefully examine what we have done here

  1. Create a resource that calls the generateForm method of our service, then stores the value of the form as the array we mentioned
  2. Create a linked signal that derives its value from the form blueprint, and creates an object with keys being the names of the fields, and values being empty strings. This will be the base of the signal form
  3. Create a signal form from the linked signal using the form function; yes, this is that simple with signal forms!
  4. In the template, iterate over form fields and define appropriate controls based on the type of the field using @switch/@case blocks
  5. To be able to dynamically read form controls and bind them to inputs with the [control] directive, we need to use $any, since the form structure is not known at compile time (obviously, it is generated with an LLM)
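
Step 2 is plain data transformation, so it can be sketched (and tested) without Angular at all; this standalone function mirrors what the linkedSignal computation does:

```typescript
interface FormField {
  name: string;
  type: 'input' | 'textarea' | 'dropdown';
  options?: { value: string; label: string }[];
}

// Mirrors the linkedSignal body: blueprint -> initial form value,
// one empty-string entry per generated field.
function blueprintToValue(blueprint: FormField[] | null): Record<string, string> | null {
  if (!blueprint) return null;
  const value: Record<string, string> = {};
  for (const field of blueprint) {
    value[field.name] = '';
  }
  return value;
}
```

Keeping this derivation pure is what lets the form function rebuild the signal form whenever a new blueprint arrives.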

And this is it! Now, the end user can simply type a description of the form they want to create, live preview it, play around, edit, iterate, and get to their final result! Let's now move to the final topic of this article (which is actually an opening into a whole new world) - function calling.

Function calling

Funnily enough, function or tool calling, while a fundamental pillar of building AI-powered applications and agentic workflows, is a feature that can essentially be thought of as a subset of structured outputs. Let's understand what it is in general and how it (slightly) differs from a structured output.

  • Structured outputs allow us to define a schema for the response we expect from the model, and the model will generate a response that adheres to that schema
  • Function calling lets us tell the LLM what functions we have available (like methods in frontend or backend), and the model will decide which one (API, database, search tool) it needs to call
  • We could theoretically achieve the same result by using structured output, however, with function calling, we can get both output (even structured) and function calls, thus being able to not only call functions, but show explainer messages and prompts from the model

So, let's start building an app command line, where the user can input prompts that will execute commands in our app, like navigating to a certain page, changing settings, and more. The very first thing we will implement is the navigation command.

To achieve this, we need to define a schema that describes our function and its parameters. Here's what it might look like:

[
  {
    "name": "navigate",
    "description": "Navigates the user to the page defined by the URL",
    "parameters": {
      "type": "object",
      "properties": {
        "url": {
          "type": "string",
          "enum": [
            "/some-url",
            "/another-url",
            "/yet-another-url"
          ]
        }
      },
      "required": [
        "url"
      ],
      "propertyOrdering": [
        "url"
      ]
    }
  }
]

As we can see, it is almost identical to what we provided for structured output, with the only difference being that we have name and description properties at the top level, which describe the function we are defining. These are of paramount importance, since the LLM will use them to decide which function(s) to call out of the many provided.

However, this setup is not very useful, since we provided some mock URLs, while the URLs our Angular app actually has are quite different, and also dynamic in the sense that in the future more routes might become available, or old ones become obsolete.

To counter this, the best thing we can do is to generate the schema dynamically, based on the actual routes our Angular app has. To do that, we can use the Router service, and extract the routes from it. Here's how we can do it:

getRoutesSchema() {
    const routes = this.#router.config
      .filter(route => route.path) // filter out routes without a path
      .map(route => `/${route.path}`); // prepend '/' to each path
    return {
      name: 'navigate',
      description: 'Navigates the user to the page defined by the URL',
      parameters: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            enum: routes,
          },
        },
        required: ['url'],
        propertyOrdering: ['url'],
      },
    };
}

Before we make this work, let's again think about what is going on here. We are creating an object that describes a function that the LLM may or may not (this is important!) choose to invoke. When we say "invoke", in this context we do not mean it will actually invoke it, but rather tell us (our app) to do so (we are still able to opt out of actually doing it). The function is navigate, and it takes a single parameter url, which is a string, and can be one of the routes we extracted from the Angular Router. So, simply put, the LLM tells the caller to execute navigate(url) to navigate to a page.
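
The same derivation works as a pure function over a route table, which makes it easy to unit test; in this sketch the injected Router.config is replaced by a plain array (an assumption made for the example):

```typescript
// Minimal stand-in for an Angular Route entry.
interface RouteLike { path?: string; }

// Build a `navigate` function declaration from a route table.
function buildNavigateSchema(config: RouteLike[]) {
  const routes = config
    .filter((route) => route.path)      // drop entries without a path (e.g. '' redirects)
    .map((route) => `/${route.path}`);  // prepend '/' to each path
  return {
    name: 'navigate',
    description: 'Navigates the user to the page defined by the URL',
    parameters: {
      type: 'object',
      properties: { url: { type: 'string', enum: routes } },
      required: ['url'],
      propertyOrdering: ['url'],
    },
  };
}
```

Because the enum is derived at call time, newly registered routes automatically become valid targets for the LLM.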

Now, we need to add a simple endpoint that will just pick up the user's prompt and the schema we dynamically obtained and call Gemini:

app.post('/command-line', async (req, res) => {
  const { commands, prompt } = req.body;

  if (!commands) {
    return res.status(400).json({ error: 'Commands are required' });
  }

  const finalPrompt = `
    You are an assistant who helps users execute commands in a web app, defined in natural language. Take a look at the tools available and the user's prompt and decide which ones to call with what arguments.

    User prompt: ${prompt}
  `;

  const response = await genAI.models.generateContent({
    model: 'gemini-2.0-flash-lite',
    contents: finalPrompt,
    config: {
      temperature: 0.1,
      tools: [{functionDeclarations: commands}],
    },
  });

  res.json(response);
});

As we can see, this time it's even simpler, as the bulk of the work is done in the frontend and provided to Gemini as a toolset it can call. Here we also slightly augment the user's prompt to make Gemini focus on choosing the right tool, since this is a command tool rather than a chatbot.

Finally, we can implement the component that will provide the user interface for our command line:

@Component({
    selector: 'app-command-line',
    template: `
        <input (keyup.enter)="onEnter($event)" />
    `,               
})
export class CommandLineComponent {
    readonly #router = inject(Router);
    readonly #genAI = inject(GenAIService);

    onEnter(event: Event) {
        const input = (event.target as HTMLInputElement).value;
        const commands = [this.getRoutesSchema()];
        // callCommandLine simply takes the user's input and the commands schema and calls the /command-line endpoint we defined above 
        this.#genAI.callCommandLine(commands, input).subscribe(response => {
            const command = response.candidates[0]?.content.parts[0].functionCall;
            this.handleCommand(command);
        });
    }

    handleCommand(command?: {name: string; args: Record<string, any>}) {
        if (!command) {
            return; // the model may answer with plain text and no function call
        }
        switch (command.name) {
            case 'navigate':
                const url = command.args['url'];
                // we might want to validate the URL here before navigating
                // since LLMs can sometimes hallucinate, in a production setting
                // it would be a good idea to check if the URL actually exists 
                // within our app routes 
                this.#router.navigateByUrl(url);
                break;
            default:
                console.warn(`Unknown command: ${command.name}`);
        }
    }

    getRoutesSchema() {
        // omitted for the sake of brevity, see above
    }
}

As we can see, the response.candidates[0]?.content.parts[0] item now contains a functionCall property, which is the function Gemini decided needs to be called and passed back to us. We then handle that result in a simple switch/case block, and call the appropriate Angular method - in our case, Router.navigateByUrl.
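
The handling logic is again pure data-in, action-out, so it can be sketched without Angular. The response shape below mimics the candidates/parts structure we read from the SDK, and the dispatcher returns a description of the action instead of performing it (a testability choice for this sketch; the real component calls Router.navigateByUrl):

```typescript
interface FunctionCall { name: string; args: Record<string, unknown>; }
interface GeminiResponseLike {
  candidates?: { content: { parts: { functionCall?: FunctionCall }[] } }[];
}

// Pull the first functionCall out of a Gemini-style response, if any.
function extractFunctionCall(response: GeminiResponseLike): FunctionCall | undefined {
  return response.candidates?.[0]?.content.parts[0]?.functionCall;
}

// Decide what the app should do; returns a plain description for testability.
function dispatch(command?: FunctionCall): string {
  if (!command) return 'no-op'; // the model answered with text only
  switch (command.name) {
    case 'navigate':
      // In the real component this is where Router.navigateByUrl runs,
      // ideally after validating the URL against the actual route table.
      return `navigate:${command.args['url']}`;
    default:
      return `unknown:${command.name}`;
  }
}
```
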

If we now open the component in a browser and type something like "navigate to writing assistant", the app will navigate to the writing assistant component we built previously! Absolutely amazing. Of course, for now we only have one command, but here is another that you can now implement on your own: a command that changes the theme of the app (light/dark). You can define a simple service that holds the current theme, and then define a function schema for changing the theme, and implement the command in the handleCommand method.

Conclusion

We built a lot on top of what we already had in the previous article. We learned:

  • structured outputs that can help us force the model to generate responses in a certain format, and how to use them to build generative UIs
  • function calling, which allows us to define functions (or tools) that the model can call
  • we touched very slightly on prompt engineering, augmenting the user's prompt to make it work better for our use case
  • we began a journey into agentic workflows, where the model can make decisions and call functions based on the user's input to come up with way more complex results than simply text generation

In the next article, we will go super deep and discuss embeddings - special numerical representations of text that allow us to do some absolutely spectacular things, like semantic search, text classification, and more. Stay tuned!

Small Promotion

Gemini and Angular, Part II: Structured Outputs and Tool calls
My book, Modern Angular, is now in print! I spent a lot of time writing about every single new Angular feature from v12-v18, including enhanced dependency injection, RxJS interop, Signals, SSR, Zoneless, and way more.

If you work with a legacy project, I believe my book will be useful to you in catching up with everything new and exciting that our favorite framework has to offer. Check it out here: https://www.manning.com/books/modern-angular

P.S. There is one chapter in my book that helps you work with LLMs in the context of Angular apps; that chapter is already somewhat outdated, despite the book being published just earlier this year (see how insanely fast-paced the AI landscape is?!). I hope you can forgive me ;)


Gemini and Angular, Part II: Structured Outputs and Tool calls
]]>
<![CDATA[Angular Zoneless Unit Testing]]>

The Future Is Zoneless — What Can We Do Today?

You’ve probably heard: Angular is moving toward a Zoneless future. Migrating your Angular app to run without Zone.js brings several benefits, but for medium-to-large applications, the process might not be trivial.

The good news is

]]>
https://www.angularspace.com/angular-zoneless-unit-testing/68bb25ca57df5d000157d637Tue, 30 Sep 2025 06:30:10 GMT

Angular Zoneless Unit Testing

The Future Is Zoneless — What Can We Do Today?

You’ve probably heard: Angular is moving toward a Zoneless future. Migrating your Angular app to run without Zone.js brings several benefits, but for medium-to-large applications, the process might not be trivial.

The good news is that you can migrate an Angular application to Zoneless gradually. For example, migrating your components to use OnPush change detection is a recommended step toward Zoneless compatibility — and it also delivers immediate performance benefits. Nowadays, this has become much simpler when using Signals.

Another powerful step is adapting your unit tests to run in Zoneless mode, ensuring that the components under test are compatible with a Zoneless application — even before your app fully runs without zone.js.

While migrating unit tests will be our focus here, I recommend reading this article from Angular Experts for general tips on gradually adapting your application code to Zoneless.

Enabling Zoneless In Your Tests

First of all, add provideZonelessChangeDetection() to the list of your providers:

TestBed.configureTestingModule({
  providers: [
    provideZonelessChangeDetection(),
    // ...
  ]
});

It’s a good idea to enforce this for new component tests while gradually migrating existing ones. Depending on how your current components and tests are structured, you may encounter some failures once you enable it.

Avoid Calling detectChanges() — especially more than once

The Angular documentation specifically recommends against manually calling fixture.detectChanges() in your test code:

To ensure tests have the most similar behavior to production code, avoid using fixture.detectChanges() when possible. This forces change detection to run when Angular might otherwise have not scheduled change detection.

Instead, await fixture.whenStable() should be used:

// not recommended (still ok)
it('should do something', () => {
  const { page } = setup();

  page.triggerSomeAction();
  page.fixture.detectChanges();

  expect(something).toBe(true);
});

// recommended
it('should do something', async () => {
  const { page } = setup();

  await page.fixture.whenStable();

  expect(something).toBe(true);
});

I noted “still ok” in the comment above detectChanges() since:

For existing test suites, using fixture.detectChanges() is a common pattern and it is likely not worth the effort of converting these to await fixture.whenStable(). TestBed will still enforce that the fixture's component is OnPush compatible and throws ExpressionChangedAfterItHasBeenCheckedError if it finds that template values were updated without a change notification

However, I noticed that calling detectChanges() more than once often causes issues when used with OnPush and/or Zoneless:

// usually problematic - avoid!
it('should correctly react on action 1 and action 2', () => {
  const { page } = setup();

  page.triggerActionOne();
  page.fixture.detectChanges(); // first call to detectChanges()
  expect(something).toBe(true);

  page.triggerActionTwo();
  page.fixture.detectChanges(); // second call, often problematic!
  expect(something).toBe(true);
});

// do this instead - keep each action separate!
it('should correctly react on action 1', async () => {
  const { page } = setup();

  page.triggerActionOne();
  await page.fixture.whenStable();
  
  expect(something).toBe(true);
});

it('should correctly react on action 2', async () => {
  const { page } = setup();

  page.triggerActionTwo();
  await page.fixture.whenStable();

  expect(something).toBe(true);
});

Get rid of fakeAsync() and tick()

Yes, I was surprised too when I realized this. Unfortunately, fakeAsync() and tick() rely on Zone.js and will no longer work once it is completely removed as a dependency from your tests.

As I mentioned in another article, these helpers have been extremely popular in the Angular testing world, so chances are you have been using them in your project.

// will not work without the zone.js dependency
it('should do something async', fakeAsync(() => {
  const { page } = setup();

  page.doSomething();
  tick();

  expect(something).toBe(true);
}));

Note that you can keep using fakeAsync() and tick() even with provideZonelessChangeDetection() enabled as long as zone.js is included as a dependency in your unit tests. So this step can be postponed.

Alternatives to fakeAsync() and tick()

At the time of writing this article, the Angular team is working with testing tools such as Jasmine and Jest to provide a proper alternative. They will likely update the Angular docs with new recommendations in the near future. Meanwhile, the recommended route is to use await fixture.whenStable() instead.

However, while awaiting whenStable() is indeed the recommended approach when it works, in my experience there are two cases where it may not be suitable:

  • When it’s not available, because you’re testing something other than a Component (e.g. a Service)
  • When, for some reason, it doesn’t actually wait for the desired action to complete

To address this, I wrote a small utility called tickAsync(). The implementation is very basic; you can find it in the lightweight testing library ngx-page-object-model, or just copy it from here. Example usage:

import { tickAsync } from 'ngx-page-object-model'; // or copy it from GitHub

it('should do something async', async () => {
  const { page } = setup();

  page.doSomething();
  // use this instead of tick() whenever fixture.whenStable() cannot be used
  await tickAsync();

  expect(something).toBe(true);
});
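For reference, here is a minimal sketch of what such a tickAsync() helper can look like (my paraphrase of the idea, not necessarily the library's exact source):

```typescript
// Minimal sketch of a tickAsync() helper: it resolves after the macrotask
// queue has had a chance to drain, optionally after a real delay in ms.
function tickAsync(ms = 0): Promise<void> {
  return new Promise<void>((resolve) => setTimeout(resolve, ms));
}
```

Awaiting it gives pending promises and scheduled timeouts a chance to run before your assertions execute.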

With this, I was able to fix most of the tests where whenStable() couldn't help. Only in a few rare cases did I also have to wait with an explicit delay:

// ⚠️ it will ACTUALLY wait for 100ms - not ideal.
await tickAsync(100);

This is, however, not ideal: tests should not wait in real time.

The best alternatives to fakeAsync() are the fake timers provided by testing frameworks such as Jasmine or Jest. While we wait for an official recommendation from Angular (you can follow this GitHub issue for more information), it is worth showing an example of using mock clocks in Jasmine:

it('should write the changed file content to the sandbox filesystem', () => {
  jasmine.clock().install();
  jasmine.clock().mockDate();
  const newContent = 'new content';

  const nodeRuntimeSandboxSpy = spyOn(fakeNodeRuntimeSandbox, 'writeFile');

  dispatchDocumentChange(newContent);
  jasmine.clock().tick(EDITOR_CONTENT_CHANGE_DELAY_MILLIES);

  expect(nodeRuntimeSandboxSpy).toHaveBeenCalledWith(service.currentFile().filename, newContent);
  jasmine.clock().uninstall();
});

Also worth mentioning is this PR to the Jasmine library from Andrew Scott, who has been heavily involved in the Zoneless support effort. Special thanks to Matthieu Riegler for suggesting these examples.

So, using the mock clocks provided by your testing framework is the best way to replace tick(). Other testing libraries, such as Jest, have similar APIs; describing all of them goes beyond the scope of this article.

Remove zone.js dependency entirely from your tests

Once you have reached the point where nothing in your tests relies on zone.js, you can remove it entirely as a dependency of your tests.

Remove this from your “test” target in project.json (NX) or angular.json (Angular CLI) file:

// remove this line
"polyfills": ["zone.js", "zone.js/testing"],
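If other polyfills are still needed, keep them; for a fully Zoneless test target, the result after removing the Zone.js entries is simply an empty array (or you can drop the option entirely):

```
"polyfills": [],
```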

Or, if your project has a test.ts setup file, make sure it no longer imports zone.js:

// delete these lines
import 'zone.js';
import 'zone.js/testing';
Angular Zoneless Unit Testing
Mount Etna — Photo by Nienke Koedijk

Conclusions

  • Angular is going to be Zoneless, which will bring several benefits
  • Migrating to Zoneless is usually a complex process that can be done gradually
  • Using OnPush and Signals paves the way to Zoneless compatibility
  • Unit Tests can help you check whether your components work in a Zoneless app even before you fully switch your app to Zoneless
  • You can enable Zoneless mode selectively for individual unit tests
  • Avoid calling fixture.detectChanges() in your tests — especially multiple times. Prefer await fixture.whenStable() instead
  • Avoid using fakeAsync() and tick() as they cannot be used without Zone.js — use mock clocks instead

]]>
<![CDATA[Building AI-powered apps with Angular and Gemini]]>Like it or not, we live in the age of AI, and it can be both exciting and frustrating. On one hand, AI can help us unlock almost unlimited capabilities for the apps we build; if in the past, tasks like image recognition or text classification could be a dealbreaker

]]>
https://www.angularspace.com/building-ai-powered-apps-with-angular-and-gemini/68d2a6047d52da0001c1ba4eMon, 29 Sep 2025 09:27:51 GMT

Like it or not, we live in the age of AI, and it can be both exciting and frustrating. On one hand, AI can help us unlock almost unlimited capabilities for the apps we build; if in the past, tasks like image recognition or text classification could be a dealbreaker for the average developer, today, customers almost assume one would be able to pull things like this off in ridiculously short timeframes.

On the other hand, however, the landscape of AI development is probably best described as "hostile". We move at incredible speeds, new models and approaches drop and then die out before we even have time to properly use them, lots of tutorials are actually disguised ways of selling us something, and lots of information is still being gatekept by more seasoned developers, hidden behind buzzwords like "agentic AI", "RAG" and so on.

This can be particularly harsh if you're a JavaScript developer, especially one working on the frontend, like me, with Angular. "Do I need to learn Python to do AI?", "I'm not a backend developer, how can I build apps with AI?", and "How do I even get started, there's so much stuff out there!" are all questions I've heard from developers in my community. Well, to be honest, those are questions I've asked myself as well.

So, for the past couple of months I've been working with the Gemini API, building different apps and tools, and now it's time for me to dispel some myths for other Angular developers and help them begin their AI journey too. Hereby, we begin a series of articles on how to build AI-powered apps with Angular and Gemini.

This is going to be a ground-up tutorial, broken down into atomic topics that will help you get started without issue. The good news is, no prior knowledge is assumed! If you are an Angular developer who has no idea how people build apps on top of AI models, this is where you start!

About the article series

In this series, we will cover the following topics:

  1. Getting started with the Gemini API: Accessing the API, making requests, creating chats, and the most important configuration options.
  2. Using embeddings: Learn about how we can utilize LLMs for more than just generating text
  3. Building RAGs: Learning about retrieval-augmented generation and how to build RAG apps with Gemini and an Angular frontend
  4. Using multimodal capabilities: How to work with images, audio, and video in Gemini
  5. Building agentic AI apps: How to build apps that can reason, plan, and execute tasks on their own
  6. Slightly touching machine learning: How we can actually forgo LLMs entirely and build way more reliable AI tools tailored for very specific tasks
  7. A lot more!

Please do not assume this is going to be just a 7-article series; it is very possible I will break the topics down into much smaller chunks, so we could end up with a lot more articles than 7!

So, let's start our journey into AI + Angular!

Getting started with Gemini API

Before we proceed, we must understand how exactly people build apps on top of LLMs. If you did not know that, and just assumed developers make API calls to OpenAI or Gemini or whatnot, well, you were entirely correct! (I swear I didn't generate this last phrase with AI :D).

However, it is even better than that, since Gemini (and other LLM providers, but we focus only on Gemini) offers a specialized SDK that makes it incredibly easy to work with the API. To get started, let's generate a new Angular app and install the Gemini SDK inside it:

npm i @google/genai

This will install the Google Generative AI SDK, which we will then use to make requests to the Gemini API in a way that is better than spamming fetch calls.

Now, before we proceed, we need to get past the first obstacles newcomers face, which, in my experience, often result in people getting intimidated and giving up: obtaining an API key and setting up billing. Many less experienced developers associate billing with spending money they cannot see, which makes them hesitant and afraid of sudden large charges.

However, I have great news, since Google offers both a free tier, and some pretty decent models that are very cheap! So, let's do the following steps:

  1. Go to the Google Cloud Console and log in with your Google account.
  2. Create a new project, give it a name that you will remember
  3. Then head over to Google AI Studio: https://aistudio.google.com/
  4. Find the "Get API key" button on the left sidebar to navigate to the keys page
  5. Create a new API key associating the key with the project you created in step 2
  6. Copy the API key somewhere safe, we will need it in a moment

Now, since we have the API key, we might be tempted to just create an Angular service, create the API instance with our key, and start making requests. Please do not do this! Think about it for a moment: if we put the API key in our frontend code, anyone can open the devtools, find the key, and start making requests on our behalf, which is both a security and a monetary risk.

Instead, we are going to build a small backend that will handle the AI part for us, and an Angular service that will make requests to our backend. This way, our API key is safe, and we can also implement additional logic in our backend, like caching, rate limiting, and so on.

Don't be too hesitant here; we are not going to build a complex backend, but essentially a thin wrapper between our frontend and the Gemini API. We are going to use Express.js for this. If you're not familiar with it at all, keep reading the following sections; if you are, you can skip to the final code example at the end of the section and continue from there.

Building a small Express.js backend

Express.js is a minimal web framework that allows us to build backend apps with Node.js. To get started, we can install express in the same Angular project we just created:

npm install express cors

Then, we can create a new file called server.js in the root of our project, and add the following code:

const express = require('express'); // importing express
const app = express();
const cors = require('cors'); // to handle CORS

app.use(cors()); // enable CORS
app.use(express.json()); // to parse JSON bodies

app.get('/', (req, res) => {
  res.send('Hello!');
}); 

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

Afterwards, we can run the server with node server.js, open http://localhost:3000 in our browser, and we should see "Hello!" displayed.

Short recap: we created an express app, and declared a route, which, when visited via a browser or a direct API call like fetch, will return the text "Hello!". We also set up the app to listen on port 3000, which is where we can access it.

Now, we want to create an endpoint that will actually allow us to play with the Gemini API. Before we do that, we need to figure out where to put our API key, since, again, we cannot put it into the source code itself (what if we want to push it to GitHub, for example?). The best way to do this is via environment variables, which may already be familiar to you even if you have been working with Angular exclusively your entire life.

A popular way to do that in NodeJS is via .env files and using the dotenv package. So, let's install it:

npm install dotenv

Then, create a new file called .env in the root of your project, and add the following line:

GEMINI_API_KEY=your_api_key_here
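Since this file now contains a secret, make sure it never ends up in version control, for example by adding it to your .gitignore:

```
# keep secrets out of the repository
.env
```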

Then, we will slightly modify our server.js file to load the environment variables from the .env file:

const express = require('express'); 
require('dotenv').config(); // load environment variables from .env file
// rest of the code stays the same for now

Now, to see this in action, let's modify our "Hello!" endpoint and make it actually generate some content via Gemini. First, we will import the GenAI SDK, create an instance of the API client, and then use it to generate some text:

const { GoogleGenAI } = require('@google/genai'); // import the SDK

const genAI = new GoogleGenAI({});

app.get('/', async (req, res) => {

  const response = await genAI.models.generateContent({
    model: 'gemini-1.5-pro', // specify the model to use
    contents: 'Give me a random greeting',
  });

  res.json(response); // return the response as JSON
});

Now, what we see here is simple enough to barely need an explanation: we create an instance of GoogleGenAI, and then, in our endpoint, we call the generateContent method, specifying the model we want to use (in this case gemini-1.5-pro, a pretty capable model for most tasks) and the content we want to generate. The response from the API is then returned as JSON.

One interesting thing here is that we did not specify the API key anywhere in the code. This is because the GenAI SDK automatically picks up the API key from the environment variable GEMINI_API_KEY, which we set in our .env file. This is a very convenient feature, as it allows us to keep our API key out of the source code completely.

Now, let's head to http://localhost:3000 in the browser once more to examine the response. We might see something like this:

{
  "sdkHttpResponse": {
    "headers": {
      <a lot of headers here>
    }
  },
  "candidates": [
    {
      "content": {
        "parts": [
          {
            "text": "Howdy!\n"
          }
        ],
        "role": "model"
      },
      "finishReason": "STOP",
      "avgLogprobs": -0.0416780412197113
    }
  ],
  "modelVersion": "gemini-1.5-pro-002",
  "usageMetadata": {
    "promptTokenCount": 5,
    "candidatesTokenCount": 3,
    "totalTokenCount": 8,
    "promptTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 5
      }
    ]
  }
}

There might be other fields in the response too, but we mainly care about these 3 top-level fields, so let's quickly explore them:

  • sdkHttpResponse: This contains the raw HTTP response from the API, including headers and status code. This can become very useful if the model, for whatever reason, returns an error, and we want to either debug it or show a proper message to the user.
  • candidates: This is the main star of the show, as it contains the actual generated content from the model. In this case, we asked for a random greeting, and the model responded with "Howdy!". The candidates array can contain multiple responses, which will become important when we start streaming responses instead of picking the finalized one.
  • usageMetadata: This contains information about the token usage for the request, which can be useful for monitoring and optimizing costs. We will explore this a bit later.

Since we have our backend set up, we can now create an Angular service that will make requests to our backend instead of directly to the Gemini API.

Creating an Angular service to interact with our backend

So far, we have only covered a very small portion of what we can do with the Gemini API, so let's take it a step further and define an endpoint that actually takes some input from the user and responds to it, instead of just generating a greeting:

app.post('/generate', async (req, res) => {
  const { prompt } = req.body; // get the prompt from the request body

  if (!prompt) {
    return res.status(400).json({ error: 'Prompt is required' });
  }

  try {
    const response = await genAI.models.generateContent({
      model: 'gemini-1.5-pro',
      contents: prompt,
    });

    res.json(response);
  } catch (error) {
    console.error('Error generating content:', error);
    res.status(500).json({ error: 'Failed to generate content' });
  }
});

While this is a bit more code than we had previously, it does not do anything complex: it simply takes a prompt from the request body and uses it to generate content via Gemini. If the prompt is missing, it returns a 400 error, and if anything goes wrong during generation, it returns a 500 error. Pretty standard stuff.

Now, we can go ahead and create an Angular service that will make requests to this endpoint.

export type GeminiResponse = {
    candidates: {
        content: {
            parts: {
                text: string
            }[]
        }
    }[];
}

@Injectable({providedIn: 'root'})
export class GenAIService {
    readonly #http = inject(HttpClient);

    generateContent(prompt: string) {
        return this.#http.post<GeminiResponse>('http://localhost:3000/generate', {prompt}).pipe(
            // map the response to just return the generated text
            map(
                response => response.candidates[0]?.content.parts[0].text || 'No response',
            )
        )
    }
}

As we can see, the GeminiResponse type we created is quite intimidating on its own, even more so considering we omitted most of the fields, leaving only the part that actually contains the generated text. However, don't be scared by it; simply copy it and keep it around, since 90% of the time you will only care about the inner parts field that contains the actual data.
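If you prefer to keep that extraction logic in one easily testable place, a tiny helper can do it (extractText is my own convenience function, not part of the SDK; the type mirrors the GeminiResponse above):

```typescript
type GeminiTextResponse = {
  candidates: { content: { parts: { text: string }[] } }[];
};

// Pull the first candidate's first text part out of a Gemini-style
// response, falling back to a default when anything is missing.
function extractText(response: GeminiTextResponse, fallback = 'No response'): string {
  return response.candidates[0]?.content.parts[0]?.text ?? fallback;
}
```

The map() callback in the service then becomes a one-liner: map(response => extractText(response)).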

Now, we can set up a component to use this service and display the generated content:

@Component({
  // FormsModule is needed for the template-driven form (ngForm / ngModel)
  imports: [FormsModule],
  template: `
    <div class="container">
      <h2>AI Text Generator</h2>
      
      <form #textForm="ngForm" (ngSubmit)="generateResponse()" class="form">
        <div class="input-group">
          <label for="prompt">Enter your prompt:</label>
          <textarea 
            id="prompt"
            name="prompt"
            [(ngModel)]="prompt"
            required
            placeholder="Type your prompt here..."
            rows="4"
            class="textarea">
          </textarea>
        </div>
        
        <button 
          type="submit" 
          [disabled]="!textForm.form.valid"
          class="submit-btn">
          Generate Response
        </button>
      </form>
      @let response = generatedResponse();
      <div class="response-section">
          <h3>Response:</h3>
          <div 
              [class.response-box]="response.error === null"
              [class.error-box]="response.error !== null">
              {{ response.text }}
          </div>
      </div>
    </div>
  `,
})
export class GenerateTextComponent {
    readonly #genAI = inject(GenAIService);
    prompt = signal('');
    generatedResponse = signal<{text: string, error: string | null}>({
        text: '', 
        error: null,
    });

    generateResponse() {
        // I would be very very happy to do this via resources
        // but they do not yet support POST requests
        // P.S. read more about resources in my article: https://www.angularspace.com/meet-http-resource/
        this.#genAI.generateContent(this.prompt()).subscribe({
            next: (response) => this.generatedResponse.set({
                text: response, error: null,
            }),
            error: () => this.generatedResponse.set({
                text: '', error: 'Error generating text',
            })
        });
    }
  
}

As we can see, on the frontend part this is reasonably simple, as we just invoke our service, make the HTTP request, store the data in a signal, and display it. Nothing too fancy, and nothing too complex.

At this point, we might get excited and want to jump into more complex stuff, like making chats, streaming responses, and so on. However, I suggest we make a sideways move and explore some of the configuration options we have when making requests to Gemini, since this will help us a lot in the future.

Configuring Gemini API

Configuring models

Let's go back for a moment, and remember that we selected a specific model to make requests to, namely gemini-1.5-pro. While this is a very capable model by itself, we might want to explore other models, since different tasks might require using more (or sometimes, surprisingly, less!) powerful models.

We can do this by creating an endpoint that specifically lists the available models:

app.get('/models', async (req, res) => {
  try {
    const response = await genAI.models.list();
    res.json(response);
  } catch (error) {
    console.error('Error listing models:', error);
    res.status(500).json({ error: 'Failed to list models' });
  }
});

Now, we can create a component with an httpResource that allows us to see the list of models:

@Component({
  template: `
    <div class="models-container">
      <h2>Available Models</h2>
      
      @if (modelsResource.isLoading()) {
        <div class="loading">Loading models...</div>
      }
      
      @if (modelsResource.error()) {
        <div class="error">Error loading models: {{ modelsResource.error() }}</div>
      }
      
      @if (modelsResource.value(); as models) {
        <ul class="models-list">
          @for (model of models.pageInternal; track model.name) {
            <li class="model-card">
              <h3>{{ model.displayName }}</h3>
              <p><strong>Name:</strong> {{ model.name }}</p>
            </li>
          }
        </ul>
      }
    </div>
  `,
})
export class ModelsListComponent {  
  modelsResource = httpResource<{pageInternal: {name: string, displayName: string}[]}>(
    () => 'http://localhost:3000/models'
);
}

Now, if we open this component, we will see a big list (around 50) of Gemini models, each tailored for different tasks. Some models are better at reasoning, useful for tasks involving complex instructions and logic; some are simply faster and better at conversation; some are specialized for image or video generation; and some are embedding models (we will learn about those in later articles of this series).

Choosing a model is a challenging task and often requires some trial and error, but in general it comes down to balancing the following three factors:

  1. Cost of the model: more capable ones are usually more expensive
  2. Speed of generation: models that have reasoning capabilities are usually slower unless we disable the thinking mode, but can be better for solving tasks instead of just text generation
  3. The task at hand: very often we do not need the latest and shiniest model, just a simple one that does the job decently, saving us money and the user's time

You can read way more about the models, their capabilities and pricing in the official documentation.

Now, let's take a look at what actually goes into pricing the model usage.

How much will you spend

Let's go back to our very first example, and take a look at the usageMetadata field in the response:

{
  "usageMetadata": {
    "promptTokenCount": 5,
    "candidatesTokenCount": 3,
    "totalTokenCount": 8,
    "promptTokensDetails": [
      {
        "modality": "TEXT",
        "tokenCount": 5
      }
    ]
  }
}

As we can see, the usageMetadata field contains information about the token usage for the request. If you're unaware of what "tokens" are in the context of LLMs, keep reading; if you already know, feel free to skip the next three paragraphs.

To understand what tokens are, we need to understand (not deeply, just at a very high level) how LLMs work. A large language model generates text by predicting the next "token" in a series of tokens. A token can be a word, a part of a word, a punctuation mark, or a special symbol.

To quickly and visually understand what tokens are, we can use the OpenAI tokenizer tool. For example, if we input the text "Hello, world!", we will see that it is broken down into 4 tokens: "Hello", ",", " world", and "!".

LLMs work by taking your text, breaking it down into tokens, and then predicting the next token based on the previous ones. This is how they generate coherent and relevant text. It is important to know that tokens are essentially fixed, so in different contexts, the same text will always be broken down into the same tokens.
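As a side note, a common rule of thumb says that English text averages roughly four characters per token. A trivial ballpark estimator (purely a heuristic, not a real tokenizer) might look like this:

```typescript
// VERY rough heuristic: English text averages about 4 characters per
// token. Use only for ballpark estimates; real tokenizers differ.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}
```

For "Hello, world!" (13 characters) this yields 4, matching the tokenizer example above.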

Understanding what tokens are, we can now understand how Gemini API pricing works. The amount you will be charged for using a model is based on the number of tokens processed during your requests. This includes both the tokens in your input (the prompt you send to the model) and the tokens in the output (the text generated by the model). If we revisit the Gemini API models list page, we will see that each model has a different price for input and output tokens (with output tokens usually being more expensive).

While the prices might seem intimidating at first, note that those are prices per 1 million (!) tokens, which, for our local, learning-oriented app, is simply a grotesquely big amount. It is roughly equal to 750,000 words, well over the word count of the entire "Lord of the Rings" trilogy! So, for most learning and prototyping purposes, you will be spending just a few cents, if anything at all (and that is only if you exceed the free tier limits).
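To make the billing model concrete, here is a small sketch that turns a response's usageMetadata into a dollar estimate. The per-million-token prices below are hypothetical placeholders; always check the official pricing page for real numbers:

```typescript
interface UsageMetadata {
  promptTokenCount: number;
  candidatesTokenCount: number;
}

// Estimate the cost of a single request: input and output tokens are
// billed separately, with prices quoted per 1 million tokens.
function estimateCostUSD(
  usage: UsageMetadata,
  inputPricePerMillion: number,  // hypothetical, e.g. $1.25 / 1M input tokens
  outputPricePerMillion: number, // hypothetical, e.g. $5.00 / 1M output tokens
): number {
  return (
    (usage.promptTokenCount / 1_000_000) * inputPricePerMillion +
    (usage.candidatesTokenCount / 1_000_000) * outputPricePerMillion
  );
}

// Our 8-token greeting request from earlier comes out to roughly
// two-thousandths of a cent with these placeholder prices:
const cost = estimateCostUSD(
  { promptTokenCount: 5, candidatesTokenCount: 3 },
  1.25,
  5.0,
);
```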

Now that we have gathered an understanding of how the API pricing works, let's finally explore, at a very high level, some parameters we can use to shape the responses we get from an LLM before wrapping up the first part of this series.

LLM response configuration

Congratulations, you have arrived at the buzzword section of this article! Here, we will explore some of the most important options LLMs can take, which you have undoubtedly heard of, even if perhaps without fully understanding them: temperature, topP, topK, and maxOutputTokens.

Here's a high level overview:

  • temperature: This parameter controls the randomness of the model's output.

Previously, we explained that LLMs generate text by predicting the next token based on the previous ones. This was a bit of an oversimplification; LLMs don't just come out and say "the next token is cat!". Instead, they first generate probabilities for each possible token, so, for instance, they may say something like "there's a 30% chance the next token is cat, a 25% chance it's dog, a 15% chance it's fish, and so on". And usually, they do not simply pick the most probable token. Think about it: if they always chose the most probable token, they would become robotic and repetitive, which is not what is usually desired.

Instead, they often try to pick some tokens that might be less probable, but still make sense in the context. This is where temperature comes into play; it controls how much randomness we want in the token selection process. A low temperature (e.g., 0.2) makes the model pick the highest probable words more often, resulting in a more predictable text, while a higher temperature (e.g., 0.8) makes the model pick less probable words more often, resulting in more creative text.

Note: temperature is not an exact science; you might have heard somewhere that a temperature of 0 makes the model "deterministic" (oh boy, do AI folks love buzzwords), but in reality it is still not what we would mean by that word in its usual sense, since a slight variation in the input prompt (like a missing comma) might still result in a vastly different LLM output. Temperature can be useful from time to time, depending on the task, but it is not a silver bullet for fixing your LLM's output.

  • topK: This parameter puts a hard limit on the set of candidate tokens; where temperature changed how likely we are to pick less probable tokens, topK simply restricts the pool to choose from. If we set it to, say, 3, the model will only consider the 3 most probable tokens when picking the next one. To be honest, most people do not bother with this, since it is quite a blunt tool: there is no way to predict whether the top 3 (or 7, or 12) tokens are actually the relevant ones, or whether some lower-ranked token would be better.

  • topP: This parameter is a bit more complex and more useful than topK. Instead of limiting the number of tokens to choose from, it limits the cumulative probability of the tokens to choose from. "Cumulative" here simply means "adding up the probabilities until we reach a certain threshold". For example, if we set topP to 0.9, the model will consider the most probable tokens until their combined probability reaches 90%. This allows for a more dynamic selection of tokens, as the number of tokens considered can vary based on their probabilities.

  • maxOutputTokens: This parameter simply limits the maximum number of tokens the model can generate in its response. This is useful to prevent the model from generating excessively long responses, which can be costly and time-consuming. For example, if we set maxOutputTokens to 50, the model will stop generating text after producing 50 tokens.

Important: maxOutputTokens just bluntly caps the number of generated tokens; it does not make the model generate shorter text. It is meant to be more of a safety net to ensure your LLM does not go ballistic and cost you a fortune. For generating shorter content, you should look into writing prompts that guide the model to be more concise (again, not an exact science, but it usually works decently).
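To make these parameters concrete, here is a tiny, self-contained sketch of how temperature, topK, and topP interact during sampling. This is emphatically not how the Gemini API is implemented internally; the candidate tokens and logits below are made up purely for illustration:

```typescript
// Toy illustration of temperature, topK and topP (nucleus) filtering.
// NOT the Gemini API's internals -- just the sampling concepts described
// above, applied to made-up token logits.

type TokenProb = { token: string; logit: number };

// Softmax with temperature: lower temperature sharpens the distribution.
function softmax(tokens: TokenProb[], temperature: number): Map<string, number> {
  const scaled = tokens.map((t) => Math.exp(t.logit / temperature));
  const sum = scaled.reduce((a, b) => a + b, 0);
  return new Map(tokens.map((t, i) => [t.token, scaled[i] / sum]));
}

// topK: keep only the K most probable tokens.
function topK(probs: Map<string, number>, k: number): [string, number][] {
  return [...probs.entries()].sort((a, b) => b[1] - a[1]).slice(0, k);
}

// topP: keep the smallest set of tokens whose cumulative probability >= p.
function topP(probs: Map<string, number>, p: number): [string, number][] {
  const sorted = [...probs.entries()].sort((a, b) => b[1] - a[1]);
  const kept: [string, number][] = [];
  let cumulative = 0;
  for (const entry of sorted) {
    kept.push(entry);
    cumulative += entry[1];
    if (cumulative >= p) break;
  }
  return kept;
}

const candidates: TokenProb[] = [
  { token: 'dog', logit: 2.0 },
  { token: 'cat', logit: 1.5 },
  { token: 'car', logit: 0.5 },
  { token: 'xylophone', logit: -1.0 },
];

const cold = softmax(candidates, 0.2); // near-greedy: 'dog' dominates
const warm = softmax(candidates, 1.0); // probability mass is more spread out
```

With these numbers, the low-temperature distribution concentrates almost all probability on the top token, while topP(warm, 0.8) keeps only 'dog' and 'cat', since their combined probability already crosses the threshold.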

All the parameters we mentioned are available in the Gemini API SDK, and we can now modify our text-generation endpoint to accept them as well:

const response = await genAI.models.generateContent({
    model: 'gemini-1.5-pro',
    contents: prompt,
    config: {
      topP: 0.5,
      temperature: 0.1,
      maxOutputTokens: 50,
    }
});

If we now retry the same messages we put into our Angular app's generate page, we might see more coherent and stable responses, and, since maxOutputTokens is set to such a low value, some messages might be cut off.

Conclusion

Wow, I bet this was a lot of information. However, this article might also leave you feeling that we have just scratched the surface (which is true). So, let's recap:

  • We learned how LLMs generate text, learned about tokens, and configuration parameters like temperature, topP, and so on
  • We learned how to create a Google Cloud project, get a Gemini API key, and use the SDK to make text generation requests
  • We learned about the diverse set of models we can utilize in the future for different tasks
  • We did all of this while requiring barely more knowledge than any Angular developer already possesses

If this was exciting, wait until you hear about the next article! In the second one, we are going to

  • Learn how to stream responses for a better UX
  • Learn how to create chats, and maintain context between messages
  • Learn a bit of prompting to secure better responses from the model
  • Touch on structured outputs which can help us solve more direct tasks than just text generation

I hope you enjoyed this article, and see you in the next one!

Small Promotion

Building AI-powered apps with Angular and Gemini
My book, Modern Angular, is now in print! I spent a lot of time writing about every single new Angular feature from v12-v18, including enhanced dependency injection, RxJS interop, Signals, SSR, Zoneless, and way more.

If you work with a legacy project, I believe my book will be useful to you in catching up with everything new and exciting that our favorite framework has to offer. Check it out here: https://www.manning.com/books/modern-angular

P.S. There is one chapter in my book that helps you work with LLMs in the context of Angular apps; that chapter is already kind of outdated, despite the book being published just earlier this year (see how insanely fast-paced the AI landscape is?!). I hope you can forgive me ;)


]]>
<![CDATA[afterRenderEffect, afterNextRender, afterEveryRender & Renderer2]]>Recently I’ve been playing around with some Angular functionalities, which are: effect, afterRenderEffect, afterNextRender, afterEveryRender and Renderer2. You don’t see them used much compared to signals or computed. Maybe only effect is more common, however how and when to use the rest?

I wanted to write

]]>
https://www.angularspace.com/afterrendereffect-afternextrender-aftereveryrender-renderer2/68b9e00f57df5d000157d5f9Tue, 16 Sep 2025 11:29:19 GMT

Recently I’ve been playing around with some Angular functionalities: effect, afterRenderEffect, afterNextRender, afterEveryRender and Renderer2. You don’t see them used much compared to signals or computed. Maybe only effect is more common, but how and when should you use the rest?

I wanted to write about them because I kept mixing them up myself, therefore this post is as much for me as for anyone else. I want to touch on each of them, look at some examples, how they differ, and also check how they behave with SSR.

Using effect

The effect() will run at least once and then every time a dependency signal (or multiple ones) changes. A shameless plug for my Senior Angular Interview Questions: there I talked about the diamond problem in RxJS and how it’s not present in signals. Meaning, if you have an effect with multiple signal dependencies and you update those signals one after another, the effect will still run only once, since signals are synchronous, compared to Observables, which are asynchronous and whose logic could be re-executed multiple times, causing side effects.

Use effect when you want to bridge the gap between reactive state (signals) and non-reactive execution. Commonly cited use cases are DOM updates, logging, or even executing a fetch() call to the server. Other, less common examples are local storage updates, analytics tracking, chart data updates, or setting loading state.

// state of the used theme
readonly theme = signal<'light' | 'dark'>('light');

// track what page we are on
readonly currentPage = signal('home');

// data to render a chart
readonly chartData = signal([1, 2, 3]);

// loading state of the app
readonly loading = signal(false);

constructor() {
  effect(() => {
	// change theme & save it
    document.body.dataset.theme = this.theme();
    localStorage.setItem('theme', this.theme());
  });
  
  effect(() => {
	// sends data to a 3rd party
    analytics.trackPage(this.currentPage());
  });
  
  effect(() => {
	// updates values in the chart
    updateChart(this.chartData());
  });
  
  effect(() => {
    // dependency to listen to
    const chartData = this.chartData();
    
    untracked(() => {
	    this.loading.set(true);
    })
  })
}

When it comes to SSR, you have to be a bit careful with effect(). It also runs on the server, at least once (even if dependencies are undefined), and then whenever its dependencies change. An empty effect is also executed: effect(() => console.log('Empty effect'));

Empty Effect Execution

In SSR you don’t actually have a browser, so there’s no window, document, or localStorage. If you drop DOM calls or browser APIs directly into an effect, it’ll throw during the server render. The trick is to only use effect() on the server for things that are safe in a Node environment. Anything that touches the DOM should be wrapped in something like isPlatformBrowser or pushed into afterRenderEffect, which runs only in the browser after Angular has finished painting.

Another point worth highlighting is the cleanup function, EffectCleanupRegisterFn. Just like in RxJS, where you unsubscribe from a stream, effect() also gives you a way to tear things down when the effect is destroyed. This is helpful when wiring up things like event listeners, intervals, or external libraries that require explicit cleanup.

It's the first argument available in the effect() callback. There is no naming convention, it will work with any name, but most developers call it onCleanup. Use it inside your effect to make sure you don’t leak memory or leave hanging listeners when the component goes away. It’s easy to forget, but in larger apps, this can save you from nasty performance issues.

@Component({
  selector: 'app-resize-listener',
  template: `
    <p>Window width: {{ width() }}</p>
  `
})
export class ResizeListenerComponent {
  // reactive signal that stores the current width
  readonly width = signal(window.innerWidth);

  constructor() {
    effect((onCleanup) => {
      const updateWidth = () => this.width.set(window.innerWidth);
      window.addEventListener('resize', updateWidth);

      // cleanup when effect is destroyed
      onCleanup(() => {
        window.removeEventListener('resize', updateWidth);
      });
    });
  }
}

Using afterRenderEffect

The afterRenderEffect is preferable for DOM updates, browser APIs, or integrations that don’t make sense on the server (like canvas drawing, chart rendering, or measuring element sizes). It is executed after Angular has painted the view in the browser. From the first examples, you could say that updating data in charts is better suited to afterRenderEffect, since we are performing a DOM update.

When you open up the documentation for this function, the Angular team highlights that “You should prefer specifying an explicit phase for the effect instead, or you risk significant performance degradation.”

In real life, you may have a use case of displaying a PDF file to a user, where you want to track (as a percentage) how far they have scrolled in the document. One of the (many) ways to achieve this behavior is the following:

@Component({
  selector: 'app-root',
  template: `
    <div #divTop style="height: 20px; position: sticky; top: 0"></div>

    <div #divRef style="height: 400px; overflow: scroll">
      <!-- this is the PDF -->
      <div style="height: 3000px; background: red"></div>
    </div>
  `,
})
export class App {
  readonly divRef = viewChild<ElementRef<HTMLDivElement>>('divRef');
  readonly divTop = viewChild<ElementRef<HTMLDivElement>>('divTop');

  readonly scrollPercentage = toSignal(
    toObservable(this.divRef).pipe(
      filter((el) => !!el),
      switchMap((el) =>
        fromEvent(el.nativeElement, 'scroll').pipe(
          map(() => {
            const scrollHeight = el.nativeElement.scrollHeight ?? 1;
            const clientHeight = el.nativeElement.clientHeight ?? 1;
            const scrollTop = el.nativeElement.scrollTop ?? 0;

            // percentage of the scrollable distance already scrolled
            return Math.round(
              (scrollTop / (scrollHeight - clientHeight)) * 100
            );
          })
        )
      )
    ),
    { initialValue: 0 }
  );

  constructor() {
    afterRenderEffect({
      // creating dependency on the scroll signal
      earlyRead: () => this.scrollPercentage(),
      // write to DOM every time scrollPercentage emits
      write: (val) => {
        const divTop = this.divTop();
        if (!divTop) {
          return;
        }

        divTop.nativeElement.innerText = `Scroll: ${val()}%`;
      },
    });
  }
}
Scroll Attached Using afterRenderEffect

In the above example, the afterRenderEffect is triggered once the browser finishes painting the DOM element. It then uses the earlyRead callback to register the scrollPercentage signal dependency. Every time scrollPercentage emits (as you scroll), the write phase is triggered to update the DOM.

Of course, you can achieve this exact result without afterRenderEffect by directly interpolating the scrollPercentage signal in the HTML ({{ scrollPercentage() }}).

The documentation about afterRenderEffect also mentions the read phase, so what’s the difference?

  • Use earlyRead when you want to read the DOM before any writes happen. Angular runs the earlyRead callback first, so you can grab measurements (width, height, etc.), before any DOM updates, and pass that information to the write phase.
  • Use read when Angular has applied all changes in the write phase, as this one runs after it. It’s good when you want to read correct measurements after all UI changes, but you cannot pass those values back to the write phase. Only earlyRead allows passing values to the write operation.

You also have the option to use the mixedReadWrite phase, which allows you to read and subsequently write data to the DOM; however, Angular recommends avoiding it and using the previously described phases instead. The phase order is:

  1. earlyRead
  2. write
  3. mixedReadWrite
  4. read
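To make this ordering concrete, here is a toy scheduler that runs callbacks in the documented phase order and shows earlyRead handing its result to write. This is not Angular's actual implementation (and in the real API, the value arrives in write wrapped in a signal); the types and names below are invented for illustration:

```typescript
// Toy model of afterRenderEffect's phase ordering -- NOT Angular's real
// scheduler. It illustrates earlyRead -> write -> mixedReadWrite -> read,
// and earlyRead passing its return value to write.

type Phases<T> = {
  earlyRead?: () => T;
  write?: (value: T | undefined) => void;
  mixedReadWrite?: () => void;
  read?: () => void;
};

function runRenderPhases<T>(spec: Phases<T>, log: string[]): void {
  let earlyReadResult: T | undefined;
  if (spec.earlyRead) {
    log.push('earlyRead');
    earlyReadResult = spec.earlyRead(); // all DOM reads happen first
  }
  if (spec.write) {
    log.push('write');
    spec.write(earlyReadResult); // writes can use the measured value
  }
  if (spec.mixedReadWrite) {
    log.push('mixedReadWrite');
    spec.mixedReadWrite();
  }
  if (spec.read) {
    log.push('read');
    spec.read(); // final reads after all writes are applied
  }
}

const order: string[] = [];
let measured = 0;
runRenderPhases(
  {
    earlyRead: () => 42, // e.g. measure an element's height here
    write: (v) => { measured = v ?? 0; }, // apply the measurement to the DOM
    read: () => { /* read final layout here */ },
  },
  order,
);
```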

One other great resource I’ve found is from Code Shots With Profanis - Get to Know the AfterRenderEffect hook in Angular. I do recommend checking out his explanation on this topic.

From my understanding, when you only use client-side rendering and ignore the phases in afterRenderEffect, it behaves the same as effect. My above example with the scroll can also be achieved by the following:

  constructor() {
    // example 1
    afterRenderEffect(() => {
      const val = this.scrollPercentage();
      const divTop = this.divTop();

      divTop.nativeElement.innerText = `Scroll: ${val}%`;
    });

    // example 2
    effect(() => {
      const val = this.scrollPercentage();
      const divTop = this.divTop();

      divTop.nativeElement.innerText = `Scroll: ${val}%`;
    });
  }

NOTE: However, by not utilizing the rendering phases, you risk layout thrashing. It happens when the browser is forced to repeatedly recalculate the layout, because your code is reading from and writing to the DOM in an uncoordinated way, creating a so-called read-write loop. By separating earlyRead (all reads) from write (all writes), Angular batches DOM reads before any writes happen, which avoids this looping behavior.
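To see why batching matters, here is a small mock (hypothetical, not a real DOM element) that counts forced layout recalculations: every read that follows a write forces a reflow, so interleaving reads and writes thrashes, while batching all reads before all writes does not:

```typescript
// Illustration of why batching DOM reads before writes matters.
// A mock "element" counts forced layout recalculations: any read that
// happens after a write has invalidated the layout forces a reflow.

class MockElement {
  reflows = 0;
  private dirty = false;

  read(): number {
    if (this.dirty) {
      this.reflows++; // browser must recalculate layout before answering
      this.dirty = false;
    }
    return 100; // pretend measurement, e.g. clientHeight
  }

  write(): void {
    this.dirty = true; // layout is now invalid
  }
}

// Interleaved read/write: every read after a write forces a reflow.
const thrashed = new MockElement();
for (let i = 0; i < 5; i++) {
  thrashed.read();
  thrashed.write();
  thrashed.read(); // forced reflow on every iteration
}

// Batched: all reads first, then all writes -- no forced reflows at all.
const batched = new MockElement();
const measurements = Array.from({ length: 5 }, () => batched.read());
measurements.forEach(() => batched.write());
```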

Using afterNextRender and afterEveryRender

The documentation about these two functions says: “we can register a render callback to be invoked after Angular has finished rendering all components on the page into the DOM.” The idea is the following:

  • afterNextRender runs only once after Angular paints the view for the first time
  • afterEveryRender runs after every render cycle, like a subscription to render events

If we were to compare these two functions to lifecycle hooks, the closest (in behavior) we would get is afterNextRender to ngAfterViewInit and afterEveryRender to ngAfterViewChecked, as afterEveryRender runs every time a tick() re-renders something dirty. Compared to lifecycle hooks, afterNextRender and afterEveryRender only run on the client side, whereas lifecycle hooks are also triggered during SSR.

The other difference is that the hooks are scoped at the component level, while afterNextRender and afterEveryRender are scoped to the rendering of the whole app (the page we are looking at). Angular’s team also provides a simple diagram to better understand the execution order.

Angular Initialization

Understanding afterNextRender is a bit simpler, since it is executed only once, after the DOM has been painted. You can move logic from ngOnInit into it, as afterNextRender is called inside the constructor. You can render data into charts once, or focus the first empty input element:

@Component({
  selector: 'app-root',
  template: `
     <input #input placeholder="first" value="Test1" />
     <input #input placeholder="second" />
     <input #input placeholder="third" />
  `,
})
export class App {
  readonly inputs = viewChildren<ElementRef<HTMLInputElement>>('input');

  constructor() {
    afterNextRender(() => {
      const inputs = this.inputs();
      const firstEmpty = inputs.find((d) => d.nativeElement.value === '');

	  // this will focus on the 'second' input
      firstEmpty?.nativeElement?.focus();
    });
  }
}
Empty Input Focus

Using afterEveryRender is, at least in my understanding, reserved for less common use cases. The question is: when do we actually need to execute some code after every rendering cycle? One example is the onStable method of NgZone. The onStable method is triggered every time change detection finishes, and as we move toward zoneless applications, you can copy onStable logic into afterEveryRender, which will result in the same behavior. See the following code and GIF for a demonstration.

@Component({
  selector: 'app-resize-listener',
  standalone: true,
  template: `
    <button (click)="onClick1()">Empty Button</button>
    <button (click)="onClick2()">Text Button</button>

    <p>Text: {{ text() }}</p>
  `,
})
export class ResizeListenerComponent {
  private readonly ngZone = inject(NgZone);

  readonly text = signal('');

  constructor() {
    this.ngZone.onStable
      .asObservable()
      .subscribe((e) => console.log('ZoneJs - triggered'));

    afterEveryRender(() => {
      console.log('afterEveryRender - triggered');
    });
  }

  onClick1() {}

  onClick2() {
    this.text.update((prev) => `${prev}K`);
  }
}
NgZone vs AfterEveryRender

In theory, I could rewrite my first scroll-percentage example from Observables to afterEveryRender, and it would work the same way:

@Component({
  selector: 'app-root',
  template: `
    <div #divTop style="height: 20px; position: sticky; top: 0"></div>

    <div #divRef style="height: 400px; overflow: scroll;">
      <div style="height: 3000px; background: red"></div>
    </div>
  `,
})
export class App {
  readonly divTop = viewChild<ElementRef<HTMLDivElement>>('divTop');
  readonly divRef = viewChild<ElementRef<HTMLDivElement>>('divRef');

  constructor() {
    afterEveryRender({
      earlyRead: () => ({
        divRef: this.divRef(),
        divTop: this.divTop(),
      }),
      write: (elements) => {
        const { divRef, divTop } = elements;
        if (!divTop || !divRef) {
          return;
        }

        const scrollHeight = divRef.nativeElement.scrollHeight ?? 1;
        const clientHeight = divRef.nativeElement.clientHeight ?? 1;
        const scrollTop = divRef.nativeElement.scrollTop ?? 0;

        const scrolled = Math.round(
          (scrollTop / (scrollHeight - clientHeight)) * 100
        );

        divTop.nativeElement.innerText = `Scroll: ${scrolled}%`;
      },
    });
  }
}

One question I was wondering about is whether to use afterEveryRender or instead reach for Renderer2 and attach listeners. The scroll percentage example could be adjusted to use Renderer2 as follows:

@Component({
  selector: 'app-scroll-tracker',
  standalone: true,
  template: `
    <p>Scroll: {{ scrollPercent() }}%</p>

    <div #divRef style="height: 200px; overflow-y: scroll;">
      <div style="height: 1000px; background: lightblue"></div>
    </div>
  `,
})
export class ScrollTrackerComponent {
  private readonly renderer = inject(Renderer2);
  private readonly destroyRef = inject(DestroyRef);
  
  readonly divRef = viewChild<ElementRef<HTMLDivElement>>('divRef');
  readonly scrollPercent = signal(0);

  // reference to the listener to destroy it with the component
  private removeListener?: () => void;

  constructor() {
    afterNextRender({
      earlyRead: () => this.divRef()?.nativeElement,
      write: (box) => {
        if (!box) {
          return;
        }

        this.removeListener = this.renderer.listen(box, 'scroll', () => {
          const percent = Math.round(
            (box.scrollTop / (box.scrollHeight - box.clientHeight)) * 100
          );
          this.scrollPercent.set(percent);
        });
      },
    });

    // destroy listener with the component
    this.destroyRef.onDestroy(() => {
      this.removeListener?.();
    });
  }
}
Scroll Counter Attached Using Renderer2

Seems like there is always more than one solution to a problem. My understanding of the difference between these two is that:

  • Renderer2 is used when we want to attach some listeners or render on the DOM
  • afterEveryRender is used when we want to fire a callback once DOM painting is done

When it comes to the scroll example, the Renderer2 version makes more sense, since we track the scroll position (an event-based behavior). However, if we wanted to scroll to the bottom of the page every time new information is presented on the screen, afterEveryRender would be the preferable option.

Summary

All in all, these tools don’t replace each other, but complement different needs. If your logic is purely reactive state, reach for effect. If it depends on the DOM, use afterRenderEffect. For one-time DOM adjustments after the first paint, use afterNextRender. If your logic needs to be executed on every DOM re-render, use afterEveryRender. Finally, Renderer2 is a universal way to attach listeners and manipulate the DOM without risking SSR crashes.

Hope you liked the article and that I was able to shed some light on these functions. They were causing some confusion, at least for me, hence I decided to dig deeper and explain them in my own words. Feel free to share your thoughts, catch more of my articles on dev.to, connect with me on LinkedIn or check my Personal Website.


]]>
<![CDATA[Migrating to Angular Signals]]>Today, everyone, everywhere in the Angular community is talking about Signals. They are the new way to manage state and reactivity in Angular applications, promising a better way to build logic and a very much improved change detection.

However, many are hesitant. And this is understandable! Not only are signals

]]>
https://www.angularspace.com/migrating-to-angular-signals/6889e9687036af0001da45caWed, 27 Aug 2025 10:43:56 GMT

Today, everyone, everywhere in the Angular community is talking about Signals. They are the new way to manage state and reactivity in Angular applications, promising a better way to build logic and a very much improved change detection.

However, many are hesitant. And this is understandable! Not only are signals new, but they also require a somewhat different way of thinking as opposed to the common way of dealing with reactivity (even without RxJS). And even with RxJS experience, many questions need our attention before we can actually move in and start using signals in our day-to-day work.

So, what are those questions that we aim to answer with this article? Let's take a look

  1. Are signals going to replace RxJS? Is RxJS "dead"?
  2. Should I migrate to signals? What are the benefits?
  3. If so, how should I migrate? This feels overwhelming!
  4. If I do, when should I use signals and when RxJS?

Let's go one by one and try to answer these questions, and, hopefully, learn something new along the way.

Are signals going to replace RxJS?

This is probably the most common question everyone is asking nowadays. Which is understandable, given that both approaches solve roughly the same problems (state management and reactivity) and are both used in Angular applications.

To answer this, let us begin with the official position of the Angular team: RxJS is going to be optional. Now, optional is a specific word. It means it will not be required, but it will definitely be supported. So what does that mean in practice?

Well, materially, the actual support for RxJS improved with the latest versions. For example, we got the takeUntilDestroyed custom operator to help us unsubscribe from our Observables more easily. On top of that, we now have the @angular/core/rxjs-interop package that allows us to use signals together with RxJS. So, the correct answer here is no, RxJS is not going to die.

Moreover, this year a big step has been taken to make Observables native in browsers: Chromium-based browsers now support an experimental, native version of Observables. This can only mean that in the future, reactive extensions (even if not in the form of RxJS) will become more ubiquitous, not less.

So, this is more or less the answer to our first question: no, RxJS is not dead and is not going to die anytime soon; when it comes to Angular, support for it has even improved. It is just that it will be optional, and many tasks can be completed with signals instead. Which naturally brings us to the next question.

Should I migrate to signals?

Now this one is a bit tricky. Signals are way simpler than RxJS, but they also have a learning curve, so we must justify moving away (in some cases!) from RxJS. Let's briefly recap what benefits signals bring to the table.

  1. Synchronous: Observables can be asynchronous, which can add to the overall confusion. Signals, on the other hand, are guaranteed to be synchronous, meaning no race conditions or other unpleasant surprises. (Of course, this also means they are unsuited for dealing with async things directly, but we are listing benefits here, aren't we?)
  2. Always available value: with Observables, we need to subscribe in order to read the latest value; with signals, we can always simply call them (mySignal()) and get the current value immediately.
  3. Simplicity: in RxJS, we have multiple powerful operators and other concepts such as Schedulers, hot and cold Observables, Subjects, etc. Signals have a very thin API layer, providing the bare minimum building blocks to handle everything we need in terms of state and reactivity.
  4. Built for Angular: while we discussed the improved RxJS interoperability in Angular, signals are built specifically for Angular, meaning they are designed to work with the framework's change detection seamlessly. Updating a signal's value already triggers change detection, making them essential for zoneless Angular apps.
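To illustrate the "synchronous" and "always available value" points, here is a minimal toy signal. This is emphatically not Angular's implementation (its real signals also track dependencies and drive change detection); it only sketches the read/write contract described above:

```typescript
// A minimal stand-in for a signal -- NOT Angular's implementation.
// Unlike an Observable, a signal can be read synchronously at any time,
// with no subscription needed.

type Signal<T> = {
  (): T;                         // read the current value
  set(value: T): void;           // replace the value
  update(fn: (v: T) => T): void; // derive the next value from the current one
};

function createSignal<T>(initial: T): Signal<T> {
  let value = initial;
  const read = (() => value) as Signal<T>;
  read.set = (v: T) => { value = v; };
  read.update = (fn: (v: T) => T) => { value = fn(value); };
  return read;
}

const count = createSignal(0);
count.set(5);             // synchronous write...
const now = count();      // ...and synchronous read: no subscribe() required
count.update((v) => v + 1);
```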

Now, if this got us convinced that we want, in fact, to migrate to signals, we can move to the next question: how do we do that?

How should I migrate to signals?

To address this question correctly, we need to first understand that this migration won't happen overnight (unless we have a really tiny app). However, we can also realize that changing one component property from an Observable (or even better, a conventional property like a string or boolean) to a signal is not that hard and, crucially, most likely won't affect the overall component logic.

This gives us the understanding that we can start migrating to signals in an incremental, one-by-one fashion. Let's now outline the important steps and then deconstruct all of those steps further.

  1. Easy steps: migrating input/output properties and view/content queries
  2. Migrating simple properties: primitive values can be dealt with here and there
  3. Migrating object properties: this is where things can get a bit more complicated, but still manageable
  4. Migrating BehaviorSubjects: also can be done easily, but a degree of care should be shown
  5. Migrating other Observables: really depends on the case, and mostly involves removing async pipes in favor of toSignal, rather than actually changing the Observable in question to a signal

So, let's discuss all these points!

Migrating inputs/outputs and view/content queries

As we know, in recent versions, Angular has introduced the input and output functions, which replace the previous @Input and @Output decorators. The input function in particular produces a signal, so it can be used with all sorts of building blocks like computed, effect, and now also linkedSignal.

So, for instance, if we have such a component:

@Component({
  selector: 'my-dialog',
  template: `...`,
})
export class MyDialogComponent {
  @Input() open: boolean;
  @Output() close = new EventEmitter<void>();
}

We can now safely migrate it to signals like this:

@Component({
  selector: 'my-dialog',
  template: `...`,
})
export class MyDialogComponent {
  open = input.required<boolean>();
  close = output<void>();
}

Now, we are not going to dive into all the intricacies of having inputs and outputs as signals, so you can consult the official documentation here and here for that.

However, what actually concerns us is the sheer scale of such a migration: in a decent enterprise-grade app, we might very well have hundreds of components with thousands of inputs that would be almost impossible to convert manually.

Thankfully, the Angular team has got us covered (as much as possible) with a special migration schematic we can run to easily convert most (if not all) of our inputs and outputs. We can simply run

ng generate @angular/core:signal-input-migration

And all of the inputs/outputs that are safe to convert will be automatically converted; if they are referenced somewhere, the reference will be updated to actually call the signal (reading its value), even in templates and host bindings.

Now let us discuss what "safe to convert" means. For instance, the schematic might not convert inputs that are re-set inside the component, as signal inputs are immutable: we cannot change their value after the initial assignment, only parent components can. This is a good thing, as it helps us avoid bugs, but it also means that we need to be careful when migrating such inputs:

@Component({
  selector: 'my-dialog',
  template: `...`,
})
export class MyDialogComponent {
  @Input() open: boolean;
  @Output() close = new EventEmitter<void>();

  closeDialog() {
    this.open = false; // this will not work if input is a signal
    this.close.emit();
  }
}

There's a way to force the schematic to do a bit more and convert slightly unsafe inputs:

ng generate @angular/core:signal-input-migration --best-effort-mode

However, you should be careful with this, as it could break your build, so always double-check.

Now, after doing the migration, we need a way to circle back later and manually migrate inputs that were not converted. For this, when running the migration, we can use an --insert-todos flag, which will insert TODO comments in the code where manual migration is required. This way, we can easily find those places later and fix them. For example:

@Component({
  selector: 'my-dialog',
  template: `...`,
})
export class MyDialogComponent {
  // TODO: Skipped for migration because:
  //  Your application code writes to the input. This prevents migration.
  @Input() open: boolean;
  close = output<void>();

  closeDialog() {
    this.open = false; // this prevented it
    this.close.emit();
  }
}

Another small case could be if we use getters to read inputs and perform side effects or convert values. Here, we would be forced to come in manually and change those to either an effect or a computed in the future.

Next, if we have a truly huge application, we might consider doing even the automatic migration incrementally; for this, we can provide a path to the schematic and only convert a part of our app at a time, test it, ship it, then come back for the next slice:

ng generate @angular/core:signal-input-migration --path=src/app/some/feature

It is worth noting that even the manual migration does not have to be painful: there's a VSCode code refactor action available that can convert an @Input or @Output to a signal, after which we can manually handle any remaining edge cases.


Everything we mentioned here is actually the worst-case scenario; we can run a migration for outputs separately, which is way safer:

ng generate @angular/core:signal-output-migration

Since it is safe, it does not include a --best-effort-mode flag, and it does not insert TODOs, as it is guaranteed to work. It also handles cases where the legacy EventEmitter's next method is used, and removes calls to EventEmitter.complete, since it is unnecessary.

Finally, we must say that we can also run a migration schematic to change view and content queries to signals, which might also be unsafe, so --best-effort-mode is available here as well:

ng generate @angular/core:signal-queries-migration

This will do! When we have our inputs, outputs and queries migrated, we can move to the next step.

Migrating simple properties

Now, when we have migrated the inputs, outputs, and other things that are naturally signals, we arrive at territory where we need to go manual, while still keeping things relatively simple. Imagine we have a property in our component that is a simple primitive value, like a string or boolean. We can simply change it to a signal and then use it as such:

@Component({
  selector: 'some-component',
  template: `
    <app-dialog [open]="open"/>
    <button (click)="toggleDialog()">Toggle Dialog</button>
  `,
})
export class MyComponent {
  open = false;

  toggleDialog() {
    this.open = !this.open;
  }
}

Now, such a property can easily be converted to a signal, and we can use it in the template as well:

@Component({
  selector: 'some-component',
  template: `
    <app-dialog [open]="open()"/>
    <button (click)="toggleDialog()">Toggle Dialog</button>
  `,
})
export class MyComponent {
  open = signal(false);

  toggleDialog() {
    this.open.update((value) => !value);
  }
}

This is very straightforward! Basically, we have to keep several simple things in mind:

  1. Change a property to be a signal of a value instead of the value itself ('true' -> 'signal(true)')
  2. Change update logic from simple assignment to using the update or set methods of the signal:
    • this.open = !this.open -> this.open.update((value) => !value)
  3. Do not forget to call the signal in the template as a function: open -> open()
  4. If your signal is bound to [(ngModel)], do not call it in the binding; it stays exactly as it was: [(ngModel)]="open" -> [(ngModel)]="open"

This was easy. Now, doing the same with complex values (like deeply nested objects and arrays) can be a bit trickier.

Migrating complex properties

Now, with complex properties as signals, the manual steps are still the same; we just keep in mind that we would mostly use .update instead of .set (although not exclusively), as it is convenient when dealing with collections and nested objects, like adding an item to an array:

arraySignal.update((array) => [...array, newItem]);

Now, what can become frustrating here is using these methods with objects of considerable size and depth. Imagine we have a state signal that contains various items, one of which is a product object, which has an orderHistory array, which contains order objects, which have a quantity property. Now imagine updating that property for the third order:

state.update((state) => {
  return ({
    ...state,
    product: {
      ...state.product,
      orderHistory: state.product.orderHistory.map((order, index) => {
        if (index === 2) { // third order
          return {
            ...order,
            quantity: order.quantity + 1 // increment quantity
          };
        }
        return order; // return other orders unchanged
      }),
    }
  })
});

Looks ugly and confusing. Angular itself does not provide a way to do this, but we can use a library like immer to help us with this. Immer simplifies working with immutable data structures by letting us write code that looks like we're directly modifying an object, array, and so on, but behind the scenes it takes these "mutations" and efficiently produces a brand-new, immutable version of our data without affecting the original object. This means we get the benefits of immutability, such as a more predictable state, without the boilerplate of manually copying and updating nested data structures. It allows us to simply do the following:

import { produce } from 'immer';

state.update((state) => {
  return produce(state, (draft) => {
    draft.product.orderHistory[2].quantity += 1; // increment quantity
  });
});

This quickly becomes very reasonable. Another concern with more complicated properties is that they might be implicitly tied to other properties without that process being handled reactively. For instance, consider the following code:

@Component({
  selector: 'some-component',
  template: `
    <select [(ngModel)]="selectedOption">
      @for (option of options; track option.id) {
        <option [value]="option.id">{{ option.name }}</option>
      }
    </select>
  `,
})
export class SomeComponent implements OnChanges {
  @Input({required: true}) options!: {id: number, name: string}[];
  selectedOption!: {id: number, name: string}; // default selection, set in ngOnChanges

  ngOnChanges(changes: SimpleChanges) {
    if (changes['options']) {
      this.selectedOption = this.options[0]; // reset selection if options change
    }
  }
}

Now, we can see that the selectedOption is implicitly tied to the options input, and if we change the options, we need to reset the selection. This is achieved in a bit of an ugly way here, using the ngOnChanges lifecycle hook as an intermediary between a dependent state and its source value.

With signals, if we convert the options input to an input signal, we can use linkedSignal to reset the option in a reactive manner:

@Component({
  selector: 'some-component',
  template: `
    <select [(ngModel)]="selectedOption">
      @for (option of options(); track option.id) {
        <option [value]="option.id">{{ option.name }}</option>
      }
    </select>
  `,
})
export class SomeComponent {
  options = input.required<{id: number, name: string}[]>();
  selectedOption = linkedSignal({
    source: this.options,
    computation: (options) => options[0]
  }); // reset to first option

  // no need for ngOnChanges anymore
}

As we can see, our component code only got simpler with this, but it also implies something big: when we start changing our more complex state properties, we may need to alter our component code as well, and while those alterations will only be for the better, it is still time-consuming and may even be challenging from time to time.

So, to recap this section, here are the steps we need to take when converting complex properties to signals:

  1. Change a property to be a signal of a value instead of the value itself ('true' -> 'signal(true)')
  2. Be careful to call the signal in the template as a function: open -> open() whenever necessary
  3. Be careful not to confuse when we do not need to call the signal in the template, like with [(ngModel)]
  4. Use .update instead of .set when dealing with complex properties or when you need the previous value
  5. Consider using a library like Immer to help with very complex updates
  6. Check our state properties for implicit dependencies and use linkedSignal or computed to handle them reactively
  7. Do all of those steps in an incremental fashion, checking and testing along the way

Now that we have inputs and state properties out of the way, we can focus on the real heavyweight: RxJS and signal interoperation.

Migrating BehaviorSubjects

Let's start this section with the most common and easy conversions: BehaviorSubjects. While they're probably not the most common Observables in Angular codebases, they are certainly the easiest to transform to signals.

To understand this, let's quickly remind ourselves what a BehaviorSubject is:

  • it is an Observable that always has a value,
  • we can read that value at any time,
  • we can subscribe to it and perform some side effects when the value changes.

This sounds a lot like a signal, doesn't it? In fact, we can convert a BehaviorSubject to a signal in a very straightforward way:

@Component({
  selector: 'some-component',
  template: `
    <p>Current value: {{ value$ | async }}</p>
    <button (click)="increment()">Increment</button>
  `,
})
export class SomeComponent {
  private value$ = new BehaviorSubject<number>(0);

  constructor() {
    this.value$.subscribe(value => {
      console.log('Value changed:', value);
    });
  }

  increment() {
    this.value$.next(this.value$.getValue() + 1);
  }
}

Now, this BehaviorSubject can be converted to a signal like this:

@Component({
  selector: 'some-component',
  template: `
    <p>Current value: {{ value() }}</p>
    <button (click)="increment()">Increment</button>
  `,
})
export class SomeComponent {
  private value = signal(0);

  constructor() {
    effect(() => {
      console.log('Value changed:', this.value());
    });
  }

  increment() {
    this.value.update(v => v + 1);
  }
}

Now, the steps are simple:

  1. Change the BehaviorSubject to a signal of the same type: new BehaviorSubject<number>(0) -> signal(0).
  2. Change the template to call the signal as a function: value$ | async -> value().
  3. Update the value-updating logic to use the signal API: this.value$.next(this.value$.getValue() + 1) -> this.value.update(v => v + 1).
  4. Change the subscription to an effect: this.value$.subscribe(...) -> effect(() => {...}).

And this is it... for 95% of cases. However, if we are using some operators, especially asynchronous ones, we might need to do a bit more work. For instance, we might want to debounce emissions before we log new values if the user clicks the "Increment" button too fast:

@Component({
  selector: 'some-component',
  template: `
    <p>Current value: {{ value$ | async }}</p>
    <button (click)="increment()">Increment</button>
  `,
})
export class SomeComponent {
  private value$ = new BehaviorSubject<number>(0);

  constructor() {
    this.value$.pipe(
      debounceTime(300), // wait for 300ms before emitting
      takeUntilDestroyed(),
    ).subscribe(value => {
      console.log('Value changed:', value);
    });
  }

  increment() {
    this.value$.next(this.value$.getValue() + 1);
  }
}

Now, let's move on to talk about converting RxJS Observables in general to signals and how to deal with operators we use.

Migrating other Observables

Let's continue with the previous example and discuss the two options we have before choosing one of them:

  • Keep the BehaviorSubject as is, and use toSignal to convert it to a signal when we need it.
  • Convert the BehaviorSubject to a signal immediately and use toObservable whenever we need to use RxJS operators.

Both options are more or less solid; however, the second one has a bit of an edge over the first, since we would like to have signals everywhere we need reactive state, and a signal is also easier to access in the template or elsewhere (someSignal() vs someBehaviorSubject$ | async and someBehaviorSubject$.getValue()).

So, let's reimagine our previous example, utilizing the toObservable function to switch to RxJS in order to use operators like debounceTime:

@Component({
  selector: 'some-component',
  template: `
    <p>Current value: {{ value() }}</p>
    <button (click)="increment()">Increment</button>
  `,
})
export class SomeComponent {
  private value = signal(0);

  constructor() {
    toObservable(this.value).pipe(
      debounceTime(300),
      takeUntilDestroyed(),
    ).subscribe(value => {
      console.log('Value changed:', value);
    });
  }

  increment() {
    this.value.update(v => v + 1);
  }
}

Now that we have explored all the cases with BehaviorSubject, let's talk about Observables in general.

Of course, the easiest tool to reach for is the toSignal function; however, let's be careful and realize we might not need it all the time. There are two glaring examples: the first is when we are using NgRx (or a similar state-management library that deals with Observables), and the second is when we are dealing with an Observable of an HTTP request.

In the case of NgRx, the library itself provides a way to get signals instead of Observables, so we can use the selectSignal method to retrieve a signal of the state from a store selector:

@Component({
  selector: 'some-component',
  template: `
    <p>Current value: {{ value() }}</p>
    <button (click)="increment()">Increment</button>
  `,
})
export class SomeComponent {
  readonly #store = inject(Store);
  value = this.#store.selectSignal(selectValue); // selectSignal returns a signal

  constructor() {
    effect(() => {
      console.log('Value changed:', this.value());
    });
  }

  increment() {
    this.#store.dispatch(incrementValue());
  }
}

Great! No need to bother with toSignal, just use the signal straight away.

Now, if we have an HTTP request, Angular now provides a separate reactive primitive, the httpResource, which handles the entire lifecycle of an HTTP request, like loading state, errors and so on:

@Component({
  selector: 'some-component',
  template: `
    @if (resource.isLoading()) {
      <p>Loading...</p>
    } @else if (resource.error()) {
      <p>Error: {{ resource.error() }}</p>
    } @else {
      <p>Data: {{ resource.value() | json }}</p>
    }
  `,
})
export class SomeComponent {
  readonly resource = httpResource(() => '/api/data');
}

As we can see, this is super simple, and we are dealing exclusively with signals, no subscriptions, no async pipes, just signals. You can read more about resources in one of my previous articles.

Note: httpResource is currently meant to work only with GET requests, and while you can force it to use a different method like POST, that is heavily discouraged. So, for now, we need to find other solutions for the rest of the HTTP verbs, such as using toSignal.

So, when do we use toSignal then? Well, let's finally move to the last question we aim to answer with this article to learn about that.

When should I use signals and when RxJS?

As we noticed throughout the article, signals were usually used as a synchronous but reactive storage for some values, while RxJS was used to handle asynchronous streams of events.

This distinction is very important and useful: always think of signals as data (or state, as we call it), and RxJS as events. This will surely help us correctly identify when to use which.

For instance, consider this:

fromEvent<MouseEvent>(document, 'click').pipe(
  map((event) => event.clientX),
  takeUntilDestroyed(),
).subscribe((x) => {
  console.log('Mouse clicked at X:', x);
});

Here, we focus on user-generated events, which are inherently asynchronous. Also, there's no concept of "state" or "data" here; we are simply reacting to events. So, this is a perfect use case for RxJS. Of course, we could use the toSignal function here, but what would that signal even be? It would always store the latest click event object, but is that even a useful semantic category? Not really, so we can safely disregard it and just use RxJS here.

However, consider a scenario where we are using some API that returns a state in the form of an Observable (for instance, what NgRx would be if not for the selectSignal method). In this case, we are dealing with a state that is updated over time, and we can use the toSignal function to convert it to a signal:

this.state = toSignal(
  this.store.select(selectState),
  {defaultValue: someDefaultValue}
);

So, we can always use this reasoning to determine whether we need to use signals or RxJS.

Now, one burning scenario might be the one we already discussed: we have a reactive value, not a stream of events, but we really want to apply some (async) RxJS operators to it, like debouncing.

In this case, as already mentioned, it would still be wise to have the value as a signal, since it's simpler to deal with, and then, in some place, convert it to an Observable, apply all the operators you like, and then subscribe to it. This way, we can still use the signal as a source of truth, but also add RxJS niceties to the mix.

Conclusion

Converting a huge Angular app to exclusively use signals is a challenging and daunting task. But of course, it is more than achievable and, as we saw in this article, can be done in a harmless, incremental way. While you will certainly encounter some complex situations along the way, hopefully this piece will help guide you toward reaching your reactivity goals!

Small Promotion

My book, Modern Angular, is now in print! I spent a lot of time writing about every single new Angular feature from v12-v18, including enhanced dependency injection, RxJS interop, Signals, SSR, Zoneless, and way more.

If you work with a legacy project, I believe my book will be useful to you in catching up with everything new and exciting that our favorite framework has to offer. Check it out here: https://www.manning.com/books/modern-angular

P.S. If you want to learn more about RxJS interoperability in Angular, or dive deep into signals, check out the 5th, 6th and 7th chapters of my book ;)


]]>
<![CDATA[First Angular Space Meetup Date!!]]>Finally!!! First Angular Space Meetup Date!!

Signals are rewriting how we think about reactivity in Angular
→ From v16 signal/computed/effect
→ to stabilized resources in v20
→ to upcoming forms & router improvements… the journey has just begun.

That’s why our first Angular Space Meetup

]]>
https://www.angularspace.com/first-angular-space-meetup-date/68a5e198ec11c5000186a8c9Wed, 20 Aug 2025 14:58:46 GMT

Finally!!! First Angular Space Meetup Date!!

Signals are rewriting how we think about reactivity in Angular
→ From v16 signal/computed/effect
→ to stabilized resources in v20
→ to upcoming forms & router improvements… the journey has just begun.

That’s why our first Angular Space Meetup is all about “The Future of Signals.”

📅 August 27, 5 PM UTC

🎙 Host: Armen Vardanyan – Angular GDE, author of Modern Angular, One of the Top Angular Space Authors

🎤 Guest: Michael Small – Frontend Lead at Relationship One, contributor to ngrx-toolkit & creator of allEventsSignal in ngxtension

We’ll cover:
✅ Signals in greenfield vs existing projects
✅ Libraries embracing signals
✅ RxJS vs Signals tradeoffs & interop experiences
✅ Migration strategies & caveats
✅ Where signals fit best in production

🔴 Live on X, YouTube, Twitch & LinkedIn!
Organized by Angular Space 🌌

For Live Stream on X visit my profile
https://x.com/DanielGlejzner

For Live Stream on LinkedIn visit my profile
https://linkedin.com/in/daniel-glejzner-271281159/

For Live Stream on YouTube
https://youtube.com/@AngularSpaceMeetup

For Live Stream on Twitch
https://twitch.tv/angularspacemeetup

]]>
<![CDATA[5 TypeScript Utility Types You Can't Live Without]]>

I decided to share a collection of custom utility types that I use in my daily work. Maybe you'll find them useful, maybe not, but it's worth knowing that creating custom types really gets the job done, especially when building strongly-typed and safe code. Perhaps it

]]>
https://www.angularspace.com/5-typescript-utility-types-you-cant-live-without/6889b4017036af0001da459bMon, 18 Aug 2025 06:51:44 GMT5 TypeScript Utility Types You Can't Live Without

I decided to share a collection of custom utility types that I use in my daily work. Maybe you'll find them useful, maybe not, but it's worth knowing that creating custom types really gets the job done, especially when building strongly-typed and safe code. Perhaps it will inspire you to create your own types that solve the problems you encounter every day?

Let's dive in!

1. Handling State Transitions with Process

Ever found yourself doing something like this?

type State = {
  loading: boolean;
  error: string | null;
  data: User | null;
};

const state: State = {
  loading: true,
  error: "",
  data: null,
};

If you do this, you're just hurting yourself, because later your code looks like this (and that's just the beginning) - I talk more about this in my article Exhaustiveness Checking And Discriminant Property: The Complete Guide.

if (loading && !error && data) {}

The discriminant property (using Discriminated Unions) comes to the rescue.

type State =
  | { status: "idle" }
  | { status: "busy" }
  | { status: "ok"; data: User }
  | { status: "fail"; error: string };

const state: State = { status: "idle" };

This allows us to directly and quickly determine which "variant" of the state we're dealing with. If it's the ok variant, we can access the data field, and if it's "fail", we can access the error field. For the other two, there's no such option. This greatly simplifies the code and eliminates extra "nulls".

if (state.status === "ok") {
   // accessing data is safe here!
}

However, repeating this structure everywhere generates a lot of duplication, and there are many states like "fetch, show, and handle error" or "save and show message". A custom Process<TData, TError> type can help with this.

// utility-types.ts

/**
 * Represents the state of an asynchronous process.
 * Defaults TData to void (no data) and TError to Error.
 */
type Process<TData = void, TError = Error, TSkipIdle = false> =
  | (TSkipIdle extends false ? { status: "idle" } : never)
  | { status: "busy" }
  // If TData is void, 'data' property is omitted
  | (TData extends void ? { status: "ok" } : { status: "ok"; data: TData })
  // If TError is void, 'error' property is omitted
  | (TError extends void
      ? { status: "fail" }
      : { status: "fail"; error: TError });


// Usage Examples:
const state: Process<User> = { status: "idle" };

// Success case
const successState: Process<User> = { status: 'ok', data: { id: '1', name: 'Test' } };

// Failure case with default Error type
const errorState: Process<User> = { status: 'fail', error: new Error('Failed to fetch') };

// "idle" removed because we don't need it in this specific case
const stateWithoutIdle: Process<Comment, Error, true> = { status: 'busy' };

Notice that we also have the option to remove the 'idle' status using TSkipIdle. Sometimes the flow is just busy -> ok -> fail because we don't want an "idle" status for a "get" operation. This would generate extra code and increase cyclomatic complexity, as our application simply doesn't need that status in such cases. Maybe you're loading data immediately when a component is mounted, so there's no need to call an additional state update just to switch from idle to busy.
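To see how the discriminant narrows at usage sites, here is a small self-contained check (the User type and the describe helper are hypothetical, and the Process utility is repeated so the snippet stands alone):

```typescript
// Same utility as above, repeated so the snippet is self-contained.
type Process<TData = void, TError = Error, TSkipIdle = false> =
  | (TSkipIdle extends false ? { status: 'idle' } : never)
  | { status: 'busy' }
  | (TData extends void ? { status: 'ok' } : { status: 'ok'; data: TData })
  | (TError extends void ? { status: 'fail' } : { status: 'fail'; error: TError });

type User = { id: string; name: string };

// The exhaustive switch narrows each branch: `data` and `error`
// are only accessible in their respective variants.
function describe(state: Process<User, string>): string {
  switch (state.status) {
    case 'idle': return 'waiting';
    case 'busy': return 'loading...';
    case 'ok':   return `loaded ${state.data.name}`; // `data` exists only here
    case 'fail': return `failed: ${state.error}`;    // `error` exists only here
  }
}

console.log(describe({ status: 'ok', data: { id: '1', name: 'Ada' } })); // 'loaded Ada'
```

Dropping a case from the switch becomes a compile-time error rather than a silent gap.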

Here it is used in a complete example to demonstrate its power.

type State = Process<User>

let state: State = { status: "idle" };

const getUser = async (userId: string) => {
  state = { status: "busy" };

  try {
    const user = await apiCallToUser(userId);
    state = { status: "ok", data: user };
  } catch (error) {
    state = { status: "fail", error: new Error("Something went wrong") };
  }
};

Instead of something like this.

type State = {
  loading: boolean;
  error: string | null;
  data: User | null;
};

let state: State = {
  loading: true,
  error: "",
  data: null,
};

const getUser = async (userId: string) => {
  state = { loading: true, error: null, data: null };

  try {
    const user = await apiCallToUser(userId);
    state = { loading: false, error: null, data: user };
  } catch (error) {
    state = { loading: false, error: "Ups", data: null };
  }
};

Both the read and update logic are cleaner and less complex in terms of cyclomatic complexity. There are four possible states instead of 2 (loading) × 2 (error) × 2 (data) = 8 combinations. There's also no risk of mistakenly updating the state (e.g., forgetting to set the loading flag to false and ending up with an infinite loader). A bit cleaner, right?

As I mentioned before, this is just one use case for this utility type. A full explanation of the problems you may encounter when defining your state variants with flags is available in the Exhaustiveness Checking and Discriminant Property: The Complete Guide article.

2. Handling API Communication with Result

When working with fetch or axios, if a promise is rejected and you forget to add a .catch() block, you'll end up with an unhandled rejection. Sometimes, an API might even return a different data shape for errors instead of rejecting the promise, which complicates handling.

Of course, this is perfectly valid behavior, but what if we could make it simpler? What if we had a type that models this common scenario more elegantly? For instance: { status: "aborted" } | { status: "fail" } | { status: "ok" }. It could look like this:

type Result<TData> =
  | { status: 'aborted' }
  | { status: 'fail'; error: unknown }
  | {
      status: 'ok';
      data: TData;
    };

// It returns Result<User>
const result = await service<User>('https://api');

if (result.status === 'aborted') return;
if (result.status === 'fail') {
  alert('Oops! ' + result.error);
  return;
}
if (result.status === 'ok') {
  alert('WORKS!');
  return;
}

// This final part serves as a safeguard. It uses TypeScript's exhaustiveness
// checking to ensure that every possible status is handled. If you were to
// comment out one of the `if` blocks above, TypeScript would throw an error on
// the following line, because the `result` variable could still hold a value
// (e.g., { status: 'aborted' }) that cannot be assigned to the `never` type.
const _exhaustiveCheck: never = result;

// This trick ensures at compile time that all cases are handled.

I've omitted the implementation of service because it may look different in the project context. Sometimes it may be more "generic", and sometimes it may be more connected to the project domain. In addition, this article is about types, not runtime. So, implementing such a service may be a good exercise for you.
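That said, for readers who want a starting point, here is one possible minimal sketch of such a service, assuming the native fetch API (the error-to-status mapping is just one choice among many, not a canonical implementation):

```typescript
type Result<TData> =
  | { status: 'aborted' }
  | { status: 'fail'; error: unknown }
  | { status: 'ok'; data: TData };

async function service<TData>(
  url: string,
  init?: RequestInit,
): Promise<Result<TData>> {
  try {
    const response = await fetch(url, init);
    if (!response.ok) {
      // Non-2xx responses are mapped to the 'fail' variant here.
      return { status: 'fail', error: new Error(`HTTP ${response.status}`) };
    }
    return { status: 'ok', data: (await response.json()) as TData };
  } catch (error) {
    // AbortController cancellations surface as an exception named 'AbortError'.
    if ((error as { name?: string } | null)?.name === 'AbortError') {
      return { status: 'aborted' };
    }
    return { status: 'fail', error };
  }
}
```

Callers can then pass an AbortController's signal via init and handle all three statuses exhaustively, as shown above.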

Especially in React, this approach to modeling API responses is very helpful. You can easily abort requests in useEffect and simply return early on the aborted status, avoiding set-state calls (or anything else) after the effect has been cleaned up because the component unmounted or a dependency in the array changed.

3. Avoid Primitive Obsession with Brand

Primitive obsession is a code smell where developers use simple primitive types, like string or number, to represent more complex, domain-specific concepts. This approach is often too generic and can lead to subtle bugs, like the one shown below:

const getUserPosts = (userId: string) => {
   // Logic that queries for users
}
const documentId = 'doc-xyz-123'; // also a string
// TypeScript sees no issue here, but it's a nasty bug!
getUserPosts(documentId);

We can prevent this by creating a branded type, which blocks such invalid assignments unless an explicit type cast is performed.

type Brand<TData, TLabel extends string> = TData & { __brand: TLabel };

type UserId = Brand<string, 'UserId'>;

This works by creating an intersection type that combines the primitive (e.g., string) with a unique, phantom property like __brand. A regular string lacks this property, so TypeScript considers it a different type, all without adding any runtime overhead as the brand is erased during compilation.

const getUserPosts = (userId: UserId) => {
   // ...
}
const documentId = 'doc-xyz-123'; // This is just a plain string
// Now TypeScript throws an error 💢, preventing the nasty bug.
// Error: Argument of type 'string' is not assignable to parameter of type 'UserId'.
getUserPosts(documentId);

// To create a UserId, you must explicitly cast it (ideally within a validation function):
const userId = "user-456" as UserId;
getUserPosts(userId); // OK
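A hypothetical validation helper makes that cast safe by keeping it in a single, checked place (the user- prefix rule below is made up for the example):

```typescript
type Brand<TData, TLabel extends string> = TData & { __brand: TLabel };
type UserId = Brand<string, 'UserId'>;

// The only place in the codebase where the cast to UserId happens.
function toUserId(raw: string): UserId {
  if (!raw.startsWith('user-')) {
    throw new Error(`Invalid user id: ${raw}`);
  }
  return raw as UserId;
}

const userId = toUserId('user-456'); // OK
// toUserId('doc-xyz-123') would throw at runtime instead of sneaking through.
```

Everywhere else in the code, a UserId can only be obtained through this function, so the compile-time guarantee is backed by a runtime check.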

4. Make Your Types Beautiful with Prettify

Have you ever created a complex type using TypeScript's utility types like Pick, Omit, or intersections (&), only to hover over it and see a long, confusing definition in your editor? Instead of a clean, flat object, TypeScript often shows you the entire formula used to create the type. This technique, popularized by Matt Pocock, solves that.

For instance, a complex, computed type might look like this in your editor's IntelliSense tooltip:

[Screenshot: the IntelliSense tooltip showing the raw, unevaluated type formula]

With the Prettify utility, we can make it look clean and readable.

type Prettify<TObject> = {
  [Key in keyof TObject]: TObject[Key];
} & {};

This simple utility type works by iterating over all the properties of the input object (TObject) and explicitly mapping them into a new object structure. The & {} at the end is a clever trick that forces TypeScript to evaluate this new structure and display the flattened, final object type instead of the underlying complex one.

Now, when you apply Prettify, you get a much nicer-looking type definition:

[Screenshot: the IntelliSense tooltip showing the flattened object type]
Don't spam it everywhere now. Just use it in cases where working with types is really hard :D.
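As a small self-contained illustration (the Address and Contact types are made up for the example), both forms accept the same values; only the tooltip display differs:

```typescript
type Prettify<TObject> = {
  [Key in keyof TObject]: TObject[Key];
} & {};

type Address = { city: string };
type Contact = { email: string };

// Hovering over Raw shows the formula: Address & Contact
type Raw = Address & Contact;

// Hovering over Flat shows the flattened result: { city: string; email: string }
type Flat = Prettify<Address & Contact>;

// The two types are mutually assignable; Prettify changes display only.
const value: Flat = { city: 'Oslo', email: 'ada@example.com' };
const same: Raw = value;
console.log(same.city); // 'Oslo'
```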

5. Type Your Routes Safely with StrictURL

Manually building URLs with string concatenation ('/users/' + userId) is fragile and prone to runtime errors. We can enforce a correct URL structure at compile time using a StrictURL utility type, which leverages some of TypeScript's more advanced features to create fully type-safe URLs.

// Recursively joins path segments with a "/". Does not add a trailing slash.
type ToPath<TItems extends string[]> = TItems extends [
  infer Head extends string,
  ...infer Tail extends string[],
]
  ? `${Head}${Tail extends [] ? '' : `/${ToPath<Tail>}`}`
  : '';

// Recursively builds a query string from parameter names
type ToQueryString<TParams extends string[]> = TParams extends [
  infer Head extends string,
  ...infer Tail extends string[],
]
  ? `${Head}=${string}${Tail extends [] ? '' : '&'}${ToQueryString<Tail>}`
  : '';

// The main utility to construct the full URL type
type StrictURL<
  TProtocol extends 'https' | 'http',
  TDomain extends `${string}.${'com' | 'dev' | 'io'}`,
  TPath extends string[] = [],
  TParams extends string[] = [],
> = `${TProtocol}://${TDomain}${TPath extends []
  ? ''
  : `/${ToPath<TPath>}`}${TParams extends []
  ? ''
  : `?${ToQueryString<TParams>}`}`;

Breaking Down the Magic

This utility looks complex, but its power comes from two key concepts working together:

  1. The infer Keyword: This is the core of the trick. infer allows us to declare a variable for a type inside a conditional check. In [infer Head, ...infer Tail], TypeScript captures the type of the first array element into a new type variable Head and the type of the rest of the array into Tail. It's essentially destructuring for types.
  2. Recursive Conditional Types: The type calls itself with the Tail of the array. This creates a "loop" that processes each segment of the path or query string one by one. The loop stops when the array is empty (the "base case"), at which point it returns an empty string, and the results are combined into the final string literal type.

In short, ToPath recursively pulls the Head off the array, appends a slash if there is more to process, and processes the Tail, until nothing is left. ToQueryString does the same but formats the output for URL parameters.

Usage Examples in Action

This allows TypeScript to compute the final URL structure, giving you incredible autocompletion and error-checking.

// A route with a dynamic segment
type HomeRoute = StrictURL<'https', 'polubinski.io', ['articles', string, 'id']>;
// Hovering shows: `https://polubinski.io/articles/${string}/id`

// A route with query parameters
type SearchRoute = StrictURL<'https', 'google.com', ['search'], ['q', 'source']>;
// Hovering shows: `https://google.com/search?q=${string}&source=${string}`

This pattern is used by modern frameworks like Next.js to provide type-safe routing out of the box, preventing an entire class of bugs.

Summary

I have many more such beauties, but I'll save them for future articles. The last one was quite complex, but to be honest, it nicely fits into the strict and type-safe direction the industry is heading. AI tools can help craft these advanced utilities. In turn, these strong types create a stricter codebase, making it easier for both human developers and AI assistants to find and fix problems.


]]>
<![CDATA[Senior Angular Interview Questions]]>Lately, I've been interviewing candidates for a Senior Angular Developer role, and I've ended up rejecting the majority of them. Am I a jerk for that? Maybe. But here's what I've realized from these interviews: we live in a bubble.

Most of

]]>
https://www.angularspace.com/senior-angular-interview-questions/6885d21139c1050001c1ee7eTue, 29 Jul 2025 07:37:37 GMT

Lately, I've been interviewing candidates for a Senior Angular Developer role, and I've ended up rejecting the majority of them. Am I a jerk for that? Maybe. But here's what I've realized from these interviews: we live in a bubble.

Most of us spend our day-to-day work calling APIs, caching data, and displaying it to users. We do this for years, and it leads us to believe we've mastered frontend development, especially a framework like Angular.

With this article, I want to walk through some of the technical questions I like to ask during interviews, all while exploring a deeper thought: "Am I really a senior developer, or do I just think I am?"

Before diving into the questions and sample answers, there's a broader discussion worth mentioning: Who's more valuable - a generalist or a specialist?

Someone who's worked across many domains (CI/CD, backend, databases, frontend, etc.), or someone who's deeply specialized in a single technology with only surface-level knowledge of the rest? That's a question only the interviewer can answer, based on the needs of the team. In my case, I was always looking for a specialist, hence the following list of Angular-focused interview questions. Finally, this is my personal list; feel free to critique or customize it for your own hiring process.

  • General Questions:
    • What does it mean to be a senior developer?
    • Do you prefer declarative or imperative programming?
    • Would you use a state management library or a custom implementation?
  • Angular Questions - General:
    • How would you achieve parent-child component communication?
    • What is the role of NgZone in Angular, and when would you opt out of Angular's change detection?
    • What is an Injection Token, and when would you use one?
    • What are resolution modifiers and how do you use them?
    • Why would you use a track function in a for-loop, and how does it work?
    • What is the difference between providers and viewProviders? Can you provide an example of when to use either of them?
    • Why are pipes considered safe in a template, but regular function calls (not signals) are not?
  • Angular Questions - Signals:
    • How would you convince your team to migrate a project from Observables to signals?
    • Can you explain the diamond problem in Observables and why it doesn't occur with signals?
    • When to use effect and untracked in a signal-based application?
    • Are life-cycle hooks still needed in a fully signal-based application?
  • Angular Questions - RxJS:
    • What is a higher-order observable, and how do they differ?
    • What is the difference between share() and shareReplay()?
    • What does this code do? - scan() + expand()

General Questions

These are my go-to questions at the very start of the interview process. There are no right or wrong answers here; what I'm really interested in is the developer's personality and thinking style. After all, this is someone who will be working closely with me and the rest of the team, and ideally, we want people who share a similar mindset or programming philosophy.

What does it mean to be a senior developer?

This question helps me understand how I should approach the person I'm interviewing. As I mentioned earlier, there's often a pendulum swing between being a generalist and a specialist, so I like to hear how the candidate defines a senior developer. Most answers I get sound something like:

"Understanding the framework (Angular) very well and mentoring junior developers."

That's a pretty standard response, and I usually follow it up with:

"Alright, so if a senior should know the framework really well, is it okay for me to ask hard questions and expect you will be able to provide answers for them?"

What I'm really trying to grasp is how the candidate sees themselves, whether they feel they've reached a senior level and whether they've dealt with complex, challenging problems. This way, we set some expectations for the interview process, not defined by me, but by the candidate themselves.

Personally, I have a slightly broader view of what makes someone a senior developer. Yes, you should understand the framework well, but I'd also expect you to:

  • Initiate and lead technical improvements, like addressing tech debt, and be able to pitch those ideas to higher-ups.
  • Push back (respectfully) on product or feature decisions that don't make sense or may harm long-term goals of the product.
  • Know how to give and receive feedback during code reviews.
  • Recognize when to Google something, when to ask a peer, and when it's worth gathering a few people for a brainstorming session.
  • Care about the product and collaborate closely with product owners.

Do you prefer declarative or imperative programming?

You might say this is a theoretical question, and it is, but I like to ask it because it reveals how candidates think about code structure and maintainability. Most candidates respond with, “I don't know the difference”.

If you Google this question, you'll see something like: “Declarative programming focuses on what needs to be done, while imperative programming focuses on how to do it”.

In Angular terms, here's one way to think about it. Imperative programming involves mutating variables in multiple places, often with manual logic to track side effects. Declarative programming, by contrast, involves defining a value or behavior in one place, often through computed, signal, or RxJS streams. I highly recommend checking out Joshua Morony's video on this topic. Here is a coding example:

// Imperative Programming
@Component({ template: ` ... ` })
export class ChildComponent {
  private readonly wsService = inject(WsService);
  private readonly apiService = inject(ApiService);

  displayedData = signal<string[]>([]);

  constructor(){
    // load existing data
    this.apiService.existingData$.subscribe((data: string[]) => {
      this.displayedData.set(data);
    });

    // listen on WS new data push
    this.wsService.newData$.subscribe((data: string) => {
      this.displayedData.update((current) => [...current, data]);
    });
  }
}
// Declarative Programming
@Component({ template: ` ... ` })
export class ChildComponent {
  private readonly wsService = inject(WsService);
  private readonly apiService = inject(ApiService);

  displayedData = toSignal(
    merge(this.apiService.existingData$, this.wsService.newData$).pipe(
      scan((acc: string[], curr: string) => [...acc, curr], [] as string[])
    ),
    { initialValue: [] });
}

The imperative version manually subscribes to two streams (existingData$ and newData$) and mutates the displayedData signal in separate steps. Each data source is handled independently, which can lead to duplicated logic and harder maintenance as complexity grows.

In contrast, the declarative version merges both streams and uses scan to build up the displayedData in a single, unified expression. It avoids manual subscriptions and keeps logic in one place. This makes the code more predictable, easier to test, and less error-prone. Overall, the declarative approach describes what should happen, while the imperative one controls how it happens.

Would you use a state management library or a custom implementation?

The question is designed to brainstorm with the candidate. Some developers lean toward libraries like NgRx, Akita, or NGXS, while others prefer simpler, custom built state solutions using services, RxJS or signals. Both approaches are valid. What I'm really curious about is whether the candidate can justify their choice, present some trade-offs, and mention potential drawbacks of the alternative.

A senior developer should articulate decisions clearly, even when their opinion differs from their peers' or contradicts the tech stack we are currently using. The provided answer will not change how I look at the candidate; I just want to see how they argue for the solution they see fit.

Angular Questions - General

How would you achieve parent-child component communication?

A simple question with a catch. Most candidates mention @Input()/@Output() bindings, or using a shared service with a Subject or a signal for communication.

// Input/Output example
@Component({ selector: 'app-child', template: ` ... ` })
export class ChildComponent<T> {
  cSelected = output<T>();
  cData = input<T[]>();
}

@Component({
  imports: [ChildComponent],
  template: `<app-child 
      [cData]="pData()" 
      (cSelected)="pSelected.set($event)" />`
})
export class ParentComponent {
  pData = signal<string[]>(['a', 'b', 'c']);
  pSelected = signal<string>('');
}
// Shared Service example
@Injectable({ providedIn: 'root' })
export class SharedService<T> {
  store = signal<T | undefined>(undefined);
}

@Component({ selector: 'app-child', template: ` ... ` })
export class ChildComponent {
  service = inject(SharedService<string[]>);

  onPushData(){
    this.service.store.set(['a', 'b', 'c']);
  }
}

@Component({ imports: [ChildComponent], template: `<app-child />` })
export class ParentComponent {
  storedData = inject(SharedService<string[]>).store
}

These are valid answers, but not enough for a senior-level developer. I expect to also hear about:

  • Custom two-way binding
  • Model inputs
  • Control Value Accessor

Custom two-way binding is achieved when the child component has an input and an output with the same property name, but the output uses the Change suffix. This syntax enables the “banana-in-a-box” [(data)] binding in the template. When the child emits a value via cDataChange.emit('something'), it directly updates the parent's pData signal or property.

@Component({ selector: 'app-child', template: ` ... ` })
export class ChildComponent {
  cData = input<string>('');
  cDataChange = output<string>();
  
  onDataChange(){
    this.cDataChange.emit('something');
  }
}

@Component({ 
	imports: [ChildComponent], 
	template: `<app-child [(cData)]="pData" />`
})
export class ParentComponent {
  pData = signal('Hello World');
}

Model inputs offer syntactic sugar over manual two-way binding. Instead of defining two separate properties (@Input and @Output) and emitting values manually, you can use model() to bind once and let Angular handle the rest. The model() binding works with both signals and non-signal properties passed from the parent.

A common use case is within custom form controls. This is not yet a signal forms example, but it's the closest we can get so far. Inside a child component, we display an input (a custom input, maybe with some specific functionality), and whenever the user types something, ngModel emits the data into cData, and since it's a model(), it will in turn emit the data to the parent's pData.

@Component({
  selector: 'app-child',
  imports: [FormsModule],
  template: `<input [(ngModel)]="cData" />  `
})
export class ChildComponent {
  cData = model<string>('');
}

@Component({
  imports: [ChildComponent],
  template: `<app-child [(cData)]="pData" />`,
})
export class ParentComponent {
  pData = signal('Hello World');
}

Control Value Accessor (CVA) is ideal when the child component acts as a custom form control. Implementing ControlValueAccessor allows the component to integrate with Angular's forms APIs, either reactive or template-driven.

I mostly use a control value accessor when creating reusable components in a UI library, or when the child is something like a custom search-select component. Imagine searching for goods in an Amazon-like web app: when you type a product name's prefix, it makes an API call, and you get a dropdown of possible options.

@Component({
  selector: 'app-custom-input',
  imports: [FormsModule],
  template: `<input [ngModel]="value" (ngModelChange)="onInput($event)"/>`,
  providers: [{
    provide: NG_VALUE_ACCESSOR,
    useExisting: forwardRef(() => CustomInputComponent),
    multi: true
  }]
})
export class CustomInputComponent implements ControlValueAccessor {
  value = '';

  // callbacks for the ControlValueAccessor
  private onChange = (value: string) => {};
  private onTouched = () => {};

  // called when input changes
  onInput(value: string): void {
    this.value = value;
    this.onChange(this.value);   // propagate change
    this.onTouched();            // mark as touched
  }

  // required from ControlValueAccessor 
  writeValue(value: string): void {
    this.value = value;
  }
	
  // required from ControlValueAccessor 
  registerOnChange(fn: (value: string) => void): void {
    this.onChange = fn;
  }

  // required from ControlValueAccessor 
  registerOnTouched(fn: () => void): void {
    this.onTouched = fn;
  }
}
@Component({
  selector: 'app-parent',
  imports: [ReactiveFormsModule, CustomInputComponent],
  template: `
    <app-custom-input [formControl]="pDataControl" />
    <p>Parent value: {{ pDataControl.value }}</p>
  `
})
export class ParentComponent {
  pDataControl = new FormControl('Hello World');
}

Control Value Accessor looks more complicated and takes time to implement, but it pays off when creating a complex custom part of a form, since it integrates with both reactive and template-driven forms.

Worth mentioning that some candidates also bring up using viewChild() to reference the child component from the parent, or using local storage or cookies to pass data. These are valid answers, but I would avoid those solutions in production.

What is the role of NgZone in Angular, and when would you opt out of Angular's change detection?

The answer to this question can really demonstrate the candidate's skills and the level of projects they have worked on. You rarely run code outside of Angular's change detection; you consider it when you run into performance issues.

NgZone is a wrapper around JavaScript's event loop that allows Angular to know when to trigger change detection. Angular patches async operations like setTimeout, Promise, XHR, etc. using zone.js, so when those operations complete, Angular automatically runs change detection to update the view. This occasionally leads to performance issues if you're running lots of non-UI-related or high-frequency code (scroll, setInterval). In those cases, you can opt out of Angular's change detection using NgZone.runOutsideAngular(), and manually re-enter with NgZone.run() only if needed.

For a practical example, I'll point to my blog post - Simple User Event Tracker In Angular - where I set up some global listeners on button clicks and input or select changes. They do not impact UI bindings, and the code runs in the background, so we can run it outside Angular's change detection system. Other places to reach for this option include integrating passive analytics, tracking, or 3rd-party scripts.

@Injectable({ providedIn: 'root' })
export class ListenerService {
  private trackerService = inject(TrackerService);
  private document = inject(DOCUMENT);
  private ngZone = inject(NgZone);

  constructor() {
    this.ngZone.runOutsideAngular(() => {
        this.document.addEventListener('change', (event) => {
            const target = event.target as HTMLElement;
            
            if (target.tagName === 'INPUT') {
                this.trackerService.createLog({
                    type: 'INPUT',
                    value: (target as HTMLInputElement).value,
                });
            }
            
            // others ....
        }, true);
    });
  }
}

What is an Injection Token, and when would you use one?

An InjectionToken is like a unique identifier or a name tag that Angular uses to locate and provide a specific value or service during dependency injection. You typically use new InjectionToken() when you want to provide a value that isn't a class such as a configuration object, primitive value, or interface-based dependency.

One common use case is running initialization logic at app startup using the APP_INITIALIZER injection token. While APP_INITIALIZER is now deprecated, the recommended replacement is the provideAppInitializer function.

bootstrapApplication(App, {
  providers: [
    provideAppInitializer(() => {
      // init languages
      // get data from cookies
      // setup sentry
      // etc ...
    }),
  ],
});

You can also define custom injection tokens, commonly used when developing Angular libraries that require configuration from the consuming app. For instance, if your library makes API calls and needs to know whether it should call a production or development API, the consumer can provide this value through a token.

// code in the library 
export const API_ENDPOINT = new InjectionToken<string>('API_ENDPOINT');

// --------------

// in a different application/library
bootstrapApplication(AppComponent, {
  providers: [
    {
      provide: API_ENDPOINT,
      useValue: '/prod/api'
    },
  ]
});

What are resolution modifiers and how do you use them?

A great explanation of this topic can be found in Decoded Frontend - Resolution Modifiers (2021). While the video is slightly dated, the concepts remain the same. When injecting a service, Angular allows you to configure up to four resolution modifiers via the second argument to the inject() function. Below is a brief overview of each, focusing on what I typically expect from a senior candidate.

private service = inject(SomeService, {
    host: true,
    optional: true,
    self: true,
    skipSelf: true
});

Optional() is used when the provided service or injection token may or may not be provided by the developer. An example is the APP_INITIALIZER injection token. When Angular injects this token, it uses inject(APP_INITIALIZER, { optional: true }), since you, as a developer, can, but don't have to, provide executable code when Angular initializes.

Self() forces Angular to resolve the dependency only from the current injector. It won't check parent injectors. This is especially useful in directives that should only operate on the element they're attached to. An example is adding an asterisk to required input fields. You use self when injecting NgControl, so it only pulls from the target element:

@Directive({
  selector: 'input[formControlName], input[formControl]'
})
export class RequiredMarkerDirective {
    private ngControl = inject(NgControl, {
        optional: true,
        self: true
    })

    constructor() {
        if (this.ngControl?.control?.hasValidator(Validators.required)) {
        // Add red asterisk
        }
    }
}

Angular itself uses self() internally, for example in ReactiveFormsModule or FormsModule to resolve sync and async validators used on the form.

SkipSelf() is the opposite of self. It tells Angular to skip the current injector and look in the parent. This is useful when a directive or component needs to interact with a container element, like a parent form. In the example below, when using the FormControlName directive on reactive forms, it tries to resolve the parent form name for the control.

@Directive({
  selector: '[formControlName]',
  providers: [controlNameBinding],
  standalone: false,
})
export class FormControlName extends NgControl implements OnChanges, OnDestroy {
constructor(
    @Optional() @Host() @SkipSelf() parent: ControlContainer,
    // ... other injectors
 )
}

Host() modifier limits resolution to the host component or directive. It prevents Angular from searching up the hierarchy. For instance, if a directive inside FinalComponent tries to inject FormGroupDirective using @Host(), Angular will only look inside FinalComponent, not any parent components that may contain the actual form.

@Directive({
  selector: '[appHostFormDirective]',
})
export class HostFormDirective {
    private formGroup = inject(FormGroupDirective, { host: true })
    
    constructor() {
        console.log('FormGroupDirective found:', this.formGroup);
    }
}
@Component({
  selector: 'app-final',
  template: `
    <form [formGroup]="form">
       <input [formControlName]="'name'" appHostFormDirective />
    </form>
  `,
  imports: [ReactiveFormsModule, HostFormDirective],
})
export class FinalComponent {
  form = new FormGroup({
    name: new FormControl<string | null>(null),
  });
}

I rarely use these modifiers in day-to-day application development. They tend to become more relevant when building libraries or advanced directives. However, Angular itself uses them extensively, and reviewing its source code is a great way to see them applied effectively.

Why would you use a track function in a for-loop, and how does it work?

The track function is a useful performance optimization that was often overlooked with the old *ngFor="let item of items" syntax. Fortunately, the new control flow @for() now requires you to specify a track function, which encourages better practices.

So, why is it important? Imagine you have a component that makes an API call to fetch a list of items, say a list of users, and displays them in the template. You also have a “reload” button to refetch this data (in case something has changed on the backend). Below is an example using the older *ngFor syntax to illustrate the issue:

@Component({
  selector: 'app-child',
  imports: [NgForOf],
  template: ` 
    <button (click)="onRerun()">re run</button>

    <div *ngFor="let item of items()">
        {{item.name}}
    </div>
`
})
export class ChildComponent {
  items = signal<{ id: string; name: string }[]>([]);

  onRerun() {
    // "fake api call" to reload data
    this.items.set([{id: '100', name: 'Item 1'}, /* ... */ ]);
  }
}

In this setup, every time onRerun() is triggered and the array is updated (even with the same content), Angular will re-render all elements in the DOM. That's because it can't tell which items stayed the same and which didn't. This results in performance loss and UI flickering, especially in long or complex lists. To prevent this, you use a trackBy function:

@Component({
  selector: 'app-child',
  imports: [CommonModule],
  template: ` 
    <ng-container *ngFor="let item of items(); trackBy: identify">
        <!-- previous code -->
    </ng-container>
`
})
export class ChildComponent {
  // ... previous code
  
  identify(index: number, item: { id: string }): string | number {
    return item.id;
  }
}

This tells Angular how to uniquely identify items in the array, commonly via an id. With a trackBy function (or track key in @for()), Angular can associate each item with its corresponding DOM element. When the data is reloaded, Angular compares these keys (not full object references), allowing unchanged items to be preserved in the DOM.

Why does this matter? Because DOM operations are expensive. Without proper tracking, Angular discards and recreates DOM elements for every item, even if the data hasn't changed. With tracking, the DOM elements stay in place, and Angular only updates bindings when necessary.

On the GIF below, the top list uses trackBy: identify while the second one does not. You can see the difference. The top list preserves DOM elements during data reload, whereas the second recreates them entirely each time.

Senior Angular Interview Questions
NgFor retrigger without trackBy

With the new @for() syntax, Angular enforces the use of a track key for the same purpose. However, two common mistakes still happen:

  • Using the object itself as the key - example: @for (item of items(); track item). This does not work as expected, because the reference to each item changes on every reload even if the data is identical, so the UI re-renders every time, basically defeating the track function.
  • Using $index as the key - example: @for (item of items(); track $index). This causes problems when an item is removed. Suppose you delete the 5th item in a list of 10: every item after index 4 now has a new index, forcing Angular to re-render them all unnecessarily. In stateful components like forms, this leads to loss of input focus or cursor position (using $index is fine for static lists, however).

Here's a comparison: the top row uses track item.id, and the second uses track $index. Watch how the first preserves DOM elements during removal. Here is a stackblitz example to play with.

Senior Angular Interview Questions
For loop using index for trackBy

What is the difference between providers and viewProviders? Can you provide an example of when to use either of them?

A great write-up on this topic is by Paweł Kubiak in his article Hidden Parts of Angular: View Providers. Below is a summary of his explanation, followed by a practical example.

“When you use providers, the service is available to the component itself, its template, any child components, and even to content projected into it using <ng-content>.

On the other hand, viewProviders limit the service's visibility strictly to the component's view. That means it's accessible only to the component and the elements declared directly in its template—but not to projected content or external child components.”

In most applications I've worked on, explicitly using providers or viewProviders was a rare use case. Where I've seen this showcased the most is in examples with NgRx and in generating dynamic components with configurable dependencies.

Let's take an example from a flight booking portal. On the final payment step, you present two payment options - Stripe (default) and PayPal - allowing users to choose. Each option has a different implementation, but both rely on a common PaymentService abstraction:

export abstract class PaymentService {
  abstract pay(): void;
}

@Injectable()
export class StripeService implements PaymentService {
  pay() { console.log('Paid with Stripe!'); }
}

@Injectable()
export class PaypalService implements PaymentService {
  pay() { console.log('Paid with PayPal!'); }
}

@Component({
  selector: 'app-payment-button',
  template: `<button (click)="handlePayment()">Pay</button>`,
})
export class PaymentButtonComponent {
  private paymentService = inject(PaymentService);

  handlePayment() {
    this.paymentService.pay();
  }
}

Of course, in real life you would need to establish a connection with the payment provider, handle errors, etc. The PaymentButtonComponent uses the abstract PaymentService, which means we need to provide an instance of either the PayPal or Stripe service. To dynamically decide which implementation to use based on user selection, you can manually create and inject the appropriate provider. This example demonstrates destroying and re-instantiating the component with a different PaymentService provider each time:

@Component({
  imports: [FormsModule],
  template: ` 
    <label>
      <input type="checkbox" [(ngModel)]="usePaypal" /> Use PayPal
    </label>

    <ng-template #container />
  `
})
export class TestComponent {
  readonly usePaypal = signal(false);

  readonly container = viewChild('container', {
    read: ViewContainerRef
  });

  constructor() {
    // init payment button
    effect(() => {
      const container = this.container();
      const usePaypal = this.usePaypal();

      untracked(() => {
        if (container) {
          this.loadComponent(container, usePaypal);
        }
      });
    });
  }

  loadComponent(vcr: ViewContainerRef, usePaypal: boolean) {
    // remove previous
    vcr.clear();

    const injector = Injector.create({
      providers: [
        {
          provide: PaymentService,
          useClass: usePaypal ? PaypalService : StripeService
        }
      ]
    });

    // attach component to DOM
    vcr.createComponent(PaymentButtonComponent, { injector });
  }
}

This example shows how providers can be dynamically configured depending on runtime logic. We don't use providedIn: 'root' here, even if our services were globally provided, because using Injector.create() always results in new instances, overriding any singleton behavior.

Even if the candidate doesn't know the exact difference, that's still okay, but I would expect at least one example where they encountered a situation in which a global service wasn't enough and they needed to use a provider to create multiple instances for whatever reason.

Why are pipes considered safe in a template, but regular function calls (not signals) are not?

Pure pipes are only reevaluated when their input values change, which makes them efficient and safe to use in templates. On the other hand, function calls in templates are executed on every change detection cycle.

So while {{ name | uppercase }} in the template is safe, {{ someHeavyFunction() }} will be called many times per second, which is rarely what you want. As the Angular docs say: “by default all pipes are considered pure, which means that it only executes when a primitive input value is changed.”

Here I'll do a shameless plug and include my article where I dove deeper into how the implementation of Angular pipes works. Under the hood, pipes create a caching object, and for a specific input, they perform the pipe's logic and store the result in the cache. Then, when change detection runs again with the same input, the pipe first checks the cache. If the output was already computed, it simply returns the cached value, making it an O(1) operation.

Angular Questions - Signals

Signals in Angular were first introduced in May 2023 with version 16, and there was quite a bit of buzz even before their official release. What fascinates me is when a candidate says, “Yeah, signals are here, but we worked on an older project and never migrated, so I never had a chance to try them out.” That's a red flag… what else can I say? As a senior developer, you're expected to have an understanding of newer features and how they work, even if you haven't used them in production yet.

How would you convince your team to migrate a project from Observables to signals?

This question tells me two things. First, whether the person has a solid understanding of signals, and second, whether they've ever initiated a larger tech debt refactor on a project. I believe a senior developer should actively drive technical improvements and come forward with such initiatives. One solid answer might be something like:

“Angular, and overall the whole frontend ecosystem, is clearly moving toward signals. There's even a TC39 proposal to support signals natively in JavaScript. Most of the new Angular APIs, like the Resource API, are designed to work with signals. Signals also simplify state management, since you can both listen to changes and synchronously read their current value.”

Can you explain the diamond problem in Observables and why it doesn't occur with signals?

So far, this question has had a very high failure rate, but I like to see how candidates react to a topic they've likely never encountered in an Angular interview.

I first came across the diamond problem in Mike Pearson's article - I changed my mind. Angular needs a reactive primitive. He argues why RxJS, even if loved, may not be the safest choice for Angular's long term, and why SolidJS chose signals.

Mike talks about the diamond problem, and the following example is heavily inspired by his blog post. We are specifically curious about the behavior of the combineLatest operator.

Let's say we have an effect that listens to two signals. Signals are synchronous and batched, meaning that if both signals are updated one after another, the effect will still run only once. However, if we use combineLatest, it emits every time a dependency changes, resulting in multiple emissions even for the same update cycle.

export class TestComponent {
  prop1 = signal('a');
  prop2 = signal('b');

  constructor() {
    effect(() => {
      const prop1 = this.prop1();
      const prop2 = this.prop2();

      console.log(`Signal: ${prop1} - ${prop2}`);
    });

    combineLatest([
      toObservable(this.prop1),
      toObservable(this.prop2)
    ]).subscribe(([p1, p2]) => console.log(`Observable: ${p1} - ${p2}`));

    setTimeout(() => {
      this.prop1.set('one');
      this.prop2.set('two');
    }, 1000);
  }
}
Senior Angular Interview Questions
Diamond Problem - RxJS vs Signals

In the console output, you'll see:

  • The effect logs only once, after both values are updated.
  • The combineLatest logs twice, once for each individual update.

This is a concrete example of the diamond problem - duplicated or excessive emissions due to shared dependencies in a reactive graph. Signals avoid this problem thanks to their synchronous and batched behavior.

I know this may be more of a “gotcha” question, so you could rephrase it as: “Why can Observables like combineLatest lead to unnecessary emissions, and how do signals prevent that?”

When to use effect and untracked in a signal-based application?

The Angular docs have a section, use cases for effects, that explains when effects should be used. Based on it, I expect a response from the candidate along these lines:

"Use effect when you have no other alternative. For example when you need to rely on a reactive value and the other end isn't reactive. Use cases may include DOM API synchronizations, sending data into analytics, communication with a non-reactive library"

It's also important that the candidate understands why the untracked function is needed when we want to exclude reads from dependency tracking in an effect. A problem I've personally run into many times: an effect read several signals but also modified some of them, creating an infinite cycle that kept the effect running constantly. Personally, I wrap most of an effect's body in untracked, leaving only the dependency signals outside of it. In the following example, I want to focus the input element when the button is clicked. I use afterRenderEffect, which works similarly to effect, with the key difference that it runs after the application has finished rendering.

import {
  afterRenderEffect, ChangeDetectionStrategy, Component,
  ElementRef, signal, untracked, viewChild
} from '@angular/core';
import { FormsModule } from '@angular/forms';

@Component({
  selector: 'app-focus-example',
  imports: [FormsModule],
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <button (click)="editMode.set(!editMode())">
      {{ editMode() ? 'Exit' : 'Enter' }} Edit Mode
    </button>

    <input #editInput [(ngModel)]="value" [disabled]="!editMode()" />
  `
})
export class FocusExampleComponent {
  editMode = signal(false);
  value = signal('Initial value');
  editInput = viewChild('editInput', { read: ElementRef });

  constructor() {
    afterRenderEffect(() => {
      const editMode = this.editMode();

      untracked(() => {
        if (editMode) {
          // read the element reference once, without tracking it
          const inputRef = this.editInput();
          // defer the focus() until after the DOM is updated
          setTimeout(() => {
            inputRef?.nativeElement?.focus();
          });
        }
      });
    });
  }
}

Are life-cycle hooks still needed in a fully signal-based application?

This question is great for brainstorming with a candidate: it shows whether they understand these hooks and also how signals work. Based on my experience, many life-cycle hooks can be replaced by signals and reactive primitives:

  • ngOnInit (NO) - Mostly replaceable with the constructor or effect(). This hook is traditionally used for initialization logic that depends on resolved inputs, data fetching, or setting up observers. For simpler logic, the constructor suffices, while more complex reactive scenarios are better handled with effect().
  • ngOnChanges (NO) - Can be replaced with computed() or effect(), as these can react to changes in input() signal dependencies.
  • ngAfterViewInit (NO) - Replaceable with effect() to perform updates on DOM elements, using viewChild() signal references as dependencies.
  • ngAfterContentInit (NO) - Similar to ngAfterViewInit, effect() can handle initialization logic based on contentChild() signal references, or you can use the afterNextRender callback.
  • ngAfterContentChecked / ngAfterViewChecked (NO) - These are called after every change detection cycle, which makes them performance sensitive. You can replace them with afterRenderEffect, which runs after the view has been rendered and a signal dependency changed.
  • ngOnDestroy (NO) - For cleanup tasks such as unsubscribing from third-party libraries, clearing intervals, or other manual teardown logic that signals don't automatically handle, you can inject DestroyRef and register a callback with onDestroy.
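As a rough, framework-free sketch of that last replacement (MiniDestroyRef is a stand-in for Angular's real DestroyRef, purely for illustration): teardown callbacks are registered up front, and the framework invokes them when the component is destroyed.

```typescript
// Framework-free sketch of the DestroyRef pattern: register teardown
// callbacks at construction time instead of implementing ngOnDestroy.
class MiniDestroyRef {
  private callbacks: Array<() => void> = [];
  onDestroy(cb: () => void): void { this.callbacks.push(cb); }
  destroy(): void { this.callbacks.forEach(cb => cb()); }
}

const destroyRef = new MiniDestroyRef();
const teardownLog: string[] = [];

// in a real component: inject(DestroyRef).onDestroy(() => clearInterval(id));
destroyRef.onDestroy(() => teardownLog.push('interval cleared'));
destroyRef.onDestroy(() => teardownLog.push('third-party lib disposed'));

destroyRef.destroy(); // the framework calls this when the component goes away
```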

Angular Questions - RxJS

Even in a fully signal-based application, there are still use cases where RxJS is the better alternative. From my experience, nearly everything can be implemented with signals, but RxJS sometimes offers a more declarative or composable approach, especially when dealing with complex async workflows, so I open a debate about some RxJS topics.

What is a higher-order observable and how do its operators differ?

For in-depth reading on this topic, see a blog post I wrote in the past: Angular Interview: What is a Higher-Order Observable?. A good example is the classic search box use case, where each keystroke triggers an API request. The candidate should be able to explain how the behavior changes depending on which higher-order mapping operator is used.

import { inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { FormControl } from '@angular/forms';
import { switchMap } from 'rxjs';

export class TestComponent {
  private readonly http = inject(HttpClient);
  readonly control = new FormControl<string>('');

  search$ = this.control.valueChanges.pipe(
    // switchMap, concatMap, mergeMap, exhaustMap
    switchMap((val) => this.http.get('...', {
      params: { val: val ?? '' }
    }))
  );
}
  • switchMap - cancels any ongoing request when a new value is typed. Ideal for search boxes, only the latest input matters.
  • mergeMap - triggers all requests in parallel. Every keystroke results in a request, regardless of timing. Good for logging, but not ideal for searches.
  • concatMap - queues each request and processes them sequentially, preserving order. Better for form submission flows, not live search.
  • exhaustMap - ignores new values while a request is in progress. Useful to prevent duplicate requests (e.g., during button mashing), but bad for fast-typing search boxes. Note that if you don't pass an abort signal to the resource API, it behaves like exhaustMap.
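The switchMap semantics can be sketched without RxJS as a "latest wins" policy: each call claims a new id, and responses belonging to a superseded request are dropped. The search helper below is hypothetical, for illustration only:

```typescript
// "Latest wins" sketch of switchMap: only the response for the most
// recent search survives; stale responses are silently dropped.
let latestId = 0;
const results: string[] = [];

function search(_query: string): (response: string) => void {
  const id = ++latestId;
  return (response: string) => {
    if (id === latestId) results.push(response); // stale responses are dropped
  };
}

const respondToAn = search('an');   // user types "an"
const respondToAng = search('ang'); // keeps typing: "ang" supersedes "an"

respondToAn('results for "an"');   // stale, dropped like a cancelled request
respondToAng('results for "ang"'); // latest, kept
```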

What is the difference between share() and shareReplay()?

Too much of a theoretical question? Not at all. In a legacy project, which still mainly relies on Observables, you may have situations where you use one of them to multicast values to subscribers. There are, however, occasional bugs, for example, when you navigate back and forth between pages. The next time you come back, you no longer have the current value, or you just retriggered logic that should have been cached by the shareReplay() operator. Or, you just ignore both and use BehaviorSubject.

Both share() and shareReplay() are RxJS multicasting operators. They allow multiple subscribers to share the same source observable, preventing duplicated side effects (like HTTP requests).

  • Use share() when you only want future subscribers to receive emissions. It doesn't retain or replay past values. Essentially, it converts a cold observable into a hot one.
  • Use shareReplay() when you want new subscribers to immediately receive the latest emitted value(s). It's useful for caching scenarios where re-executing the source (e.g., an HTTP request) is costly or undesirable.

You can configure shareReplay() using options like:

  • bufferSize – The number of previous values to remember and replay to new subscribers. Typically set to 1 for simple caching.
  • refCount – When true, the observable automatically unsubscribes from the source when there are no subscribers. When false, it stays connected indefinitely (useful for shared streams).
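A hand-rolled sketch of the difference (SharedSource is an illustrative stand-in, not the RxJS implementation): share()-style subscribers only see future emissions, while a shareReplay(1)-style subscriber immediately receives the buffered last value.

```typescript
// Minimal multicasting sketch: one source, two subscription styles.
type Listener = (value: number) => void;

class SharedSource {
  private listeners: Listener[] = [];
  private lastValue: number | undefined;

  emit(value: number): void {
    this.lastValue = value;
    this.listeners.forEach(fn => fn(value));
  }

  subscribeShare(fn: Listener): void {
    this.listeners.push(fn); // future emissions only
  }

  subscribeShareReplay(fn: Listener): void {
    if (this.lastValue !== undefined) fn(this.lastValue); // replay buffer of 1
    this.listeners.push(fn);
  }
}

const source = new SharedSource();
source.emit(42); // emitted before either subscriber exists

const shareLog: number[] = [];
const replayLog: number[] = [];
source.subscribeShare(v => shareLog.push(v));        // misses 42
source.subscribeShareReplay(v => replayLog.push(v)); // immediately gets 42

source.emit(7); // both receive future values
```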

What does this code do? - scan() + expand()

Both scan() and expand() are rarely used in everyday Angular development. However, their presence can indicate that a candidate has encountered more complex problems, problems that go beyond the usual use of map, filter, or take operators. I like to show a practical example like this:

  private paginationOffset$ = new Subject<number>();

  loadedMessages = toSignal(this.paginationOffset$.pipe(
    startWith(0),
    exhaustMap((offset) =>
      this.api.getMessages(offset).pipe(
        expand((_, i) =>
          (i < 2 ? this.api.getMessages(offset + (i + 1) * 20) : EMPTY)
        ),
        map((data) => ({ data })),
        catchError(() => of({ data: [] as MessageChat[] })),
        startWith({ data: [] as MessageChat[] }),
      ),
    ),
    scan(
      (acc, curr) => ({ data: [...acc.data, ...curr.data] }),
      { data: [] as MessageChat[] },
    ),
  ), { initialValue: { data: [] as MessageChat[] } });

  nextScroll() {
    this.paginationOffset$.next(this.loadedMessages().data.length);
  }

The code above implements a recursive API-based pagination pattern. Every time the user triggers nextScroll() (for example, by clicking a "Load More" button), the number of already loaded messages is emitted into the paginationOffset$ subject. Inside the loadedMessages signal:

  • exhaustMap ignores new emissions until the current inner observable completes. The user is unable to load more data until the first batch completes.
  • expand is used to recursively call the API multiple times. We assume each call returns 20 messages. Using expand, we can simulate loading three pages in one shot (initial call + two recursive calls).
  • scan accumulates all the loaded messages into one stream without losing previously fetched ones.
  • catchError ensures that any failed API call doesn't break the chain.
  • startWith ensures the stream emits an initial empty state, avoiding undefined references.
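Written imperatively, the expand + scan combination boils down to "fetch the initial page plus two follow-ups, then fold them into one accumulated state". In this sketch, getMessages is a fake synchronous API returning 20 messages per page, for illustration only:

```typescript
// Fake API: page of 20 messages starting at the given offset.
const getMessages = (offset: number): string[] =>
  Array.from({ length: 20 }, (_, i) => `msg-${offset + i}`);

function loadThreePages(offset: number): { data: string[] } {
  // expand: the initial call plus two recursive follow-up calls
  const pages = [0, 1, 2].map(page => getMessages(offset + page * 20));
  // scan: fold every emission into one accumulated state
  return pages.reduce(
    (acc, page) => ({ data: [...acc.data, ...page] }),
    { data: [] as string[] },
  );
}

const loaded = loadThreePages(0); // 3 pages x 20 messages
```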

Summary

Other notable questions may be:

  • Describe a time you had to refactor legacy code in Angular, how did you approach it?
  • How do you handle code scalability and performance in large Angular apps?
  • What is OnPush change detection and when would you use it?
  • What's the difference between combineLatest, withLatestFrom, and forkJoin and how would you decide which to use?
  • What is your approach to testing, and what mocking library do you use?
  • How would you migrate an existing app to standalone components and Signals gradually?
  • What is hydration, how do you enable it, and why is it needed?

Overall, these are the types of questions I lean toward. However, the most important one we always ask at the end is:

“Can you tell us about some more complex feature you have worked on in the past 1–2 years? What was the problem, and how did you solve it?”

The candidate may be missing some Angular specific knowledge, but they may have faced challenging problems, possibly even ones we're currently facing. Prior experience solving real problems often outweighs deep framework trivia, which can be learned over time.

Ultimately, it depends on your company's needs. Do you need a strong Angular expert who can refactor and migrate legacy code while minimizing future tech debt? Or are you looking for someone to join a broader team, where they'll grow with support from others? Decide for yourself.

Feel free to share your thoughts, catch more of my articles on dev.to, connect with me on LinkedIn or check my Personal Website.


Senior Angular Interview Questions
]]>
<![CDATA[6 CSS Snippets Every Front-End Developer Should Know In 2025]]>2025; I think every front-end developer should know how to enable page transitions, transition a <dialog>, popover, and <details>, animate light n' dark gradient text, type safe their CSS system, and add springy easing to animation.

AI is not going to give you this CSS.

]]>
https://www.angularspace.com/6-css-snippets-every-front-end-developer-should-know-in-2025/678f2ed432df9800013f2340Fri, 25 Jul 2025 15:48:36 GMT

2025; I think every front-end developer should know how to enable page transitions, transition a <dialog>, popover, and <details>, animate light n' dark gradient text, type safe their CSS system, and add springy easing to animation.

AI is not going to give you this CSS.

This post is a theme continuation; checkout previous years 2023 and 2024 where I shared snippets for those years.

This year, the snippets are bigger, more powerful, and leverage progressive enhancement a bit more; to help us step up to the vast UI/UX requirements of 2025.

Springy easing with linear()

6 CSS Snippets Every Front-End Developer Should Know In 2025

Sprinkle life into animations with natural looking spring and bounce easings using linear().


Using a series of linear lines to make "curves", you can create surprisingly realistic visual physics. A small amount can go a long way to adding interest and intrigue to the user experience.

In the following video, the top animation uses ease-out and the bottom uses linear(), and I think the results are quite different, the bottom being more desirable.


Here's some typical linear() easing CSS 😅:

.springy {
  transition: transform 1s
    linear(
      0,
      0.009,
      0.035 2.1%,
      0.141,
      0.281 6.7%,
      0.723 12.9%,
      0.938 16.7%,
      1.017,
      1.077,
      1.121,
      1.149 24.3%,
      1.159,
      1.163,
      1.161,
      1.154 29.9%,
      1.129 32.8%,
      1.051 39.6%,
      1.017 43.1%,
      0.991,
      0.977 51%,
      0.974 53.8%,
      0.975 57.1%,
      0.997 69.8%,
      1.003 76.9%,
      1.004 83.8%,
      1
    );
}

Yes… that's what your formatter will do to it, as if it's helpful in some way lol.

The linear() code above is not very human readable, but the machines love it. No frets, there's a few generators out there:

Tip! 💡

Expect longer durations when using linear(). When things run long, it can be nice to make them seamlessly interruptible, making linear() a great fit for transitions and potentially troublesome as keyframes.

You could alternatively use premade CSS variables from a library like Open Props:

@import "https://unpkg.com/open-props/easings.min.css";

.springy {
  @media (prefers-reduced-motion: no-preference) {
    transition: transform 1s var(--ease-spring-3);
  }
}

Easy CSS to read, comes with 5 strengths for common effects:


Tip💡

Try the Open Props Springs notebook! <--

Incrementally adopt

This one is super easy to toss in today.

Easiest way, use the cascade (if you're into that):

@media (prefers-reduced-motion: no-preference) {
  /* just repeat the shorthand with adjusted easing */
  .thingy {
    transition: transform 0.3s ease;
    transition: transform 0.3s linear(…);
  }

  /* or, target a specific property */
  .thingy {
    animation-timing-function: var(--ease-1);
    animation-timing-function: var(--ease-spring-2);
  }
}

If it knows, it knows.

Or, test for it first and scalpel apply the upgrade:

.thingy {
  transition: transform 0.3s ease;

  @supports (transition-timing-function: linear(0, 0.1, 1)) {
    transition-timing-function: var(--ease-spring-2);
  }
}

Typed custom properties

6 CSS Snippets Every Front-End Developer Should Know In 2025

Similar to JS variables defined with var, CSS variables defined with -- are global, loose, dynamic and flexible. This is great.

But… there are times, like when building a system, that you want to limit what goes into variables so a system can run with a reasonable amount of reliability.


In the above video, a variable is set to an invalid color. At first, this breaks the system. But once @property is added, the system keeps functioning with the last known valid color value.

Create a typed <color> CSS variable like this:

@property --color-1 {
  syntax: "<color>";
  inherits: false;
  initial-value: rebeccapurple;
}

In addition to type safety, @property defined variables can also be animated because the browser can infer the steps needed to interpolate the value change based on the assigned type.

Before @property, the browser couldn't derive a type and discover interpolation steps; it was too complicated. Now you give the browser a hint, and it's simple.

In 2025, y'all front-end devs should be getting familiar with defining variables with @property because it:

  1. Formalizes CSS system interfaces
  2. Protects CSS systems
  3. Enables new animation powers
  4. Can perform better when using inherits: false
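For example, because a registered custom property has a known type, the browser can transition it directly. A minimal sketch, assuming the --color-1 registration from above (class name and duration are arbitrary):

```css
.card {
  background: var(--color-1);
  transition: --color-1 0.3s ease; /* interpolates because --color-1 is typed <color> */
}

.card:hover {
  --color-1: hotpink;
}
```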

Resources

View transitions for page navigation

Y'all should know how to crossfade pages when links are clicked with this tiny view transitions snippet:

@view-transition {
  navigation: auto;
}


https://codepen.io/argyleink/project/full/DezgjV

This is the easiest snippet to add to your site, with no downsides.

It signals your website would like to use page transitions when links are clicked, and the default transition is a crossfade.

If the browser doesn't support it, it continues as it always has; but if it does support it then you dab the page with some special sauce.

There's plenty more customization you can do, like full page animations. But the gist of this section is just to share that easy snippet and the way the feature can be progressively enhanced.
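For instance, you can adjust the default crossfade by targeting the pseudo-elements the browser generates for the transition; a small sketch (the selectors are standard, the duration is an arbitrary choice):

```css
/* slow the default page crossfade down a touch */
::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 0.4s;
}
```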

Incrementally adopt

There's many more opportunities to add additional animations with the page transition.

A great place to start enhancing this page transition experience is to identify elements commonly found across pages, and give them a name:

.nav {
  view-transition-name: --nav;
}

.sidenav {
  view-transition-name: --sidenav;
}

This includes elements in the page transition.

They can even be different elements.

a, h1 {
  view-transition-name: --morphy;
}

By giving an <a> and an <h1> the same view-transition-name on two different pages, the browser will move and resize the page 1 element to the location and size of the page 2 element, making it look like a morph. You can of course morph between the same elements also.


https://codepen.io/argyleink/project/full/AbvgrM

There is so much more. Continue giving elements names and studying the rad examples by Bramus, and you can create experiences with motion like this:


https://view-transitions.chrome.dev/off-the-beaten-path/mpa/

I love the DevTools for Animations, scrubbing that full page view transition is very satisfying, and excellent for really inspecting and improving the little details.

Resources

Transition animation for <dialog> and [popover]

In 2025, knowing your way around a <dialog> and a [popover] are table stakes. Otherwise, everyone else will be on top of you, and your wack z-index attempts will be defeated with a puny value of 1.

These are common UI elements, with no JavaScript to download, and accessibility built in. Use em and know the differences.

6 CSS Snippets Every Front-End Developer Should Know In 2025

These two elements are projected into a layer above all the other UI called the top layer. The browser projects the elements from anywhere in the document, to the top when shown.

To transition this, there’s a few new CSS properties, for the full interruptible CSS transition user experience — transition-behavior, the @starting-style rule, and overlay.

6 CSS Snippets Every Front-End Developer Should Know In 2025

Combining these can feel like an incantation, but that makes it great copy and paste. So here! Use the following snippet to enable cross fade transitions for both <dialog> and popover: Try it

/* enable transitions, allow-discrete, define timing */
[popover], dialog, ::backdrop {
  transition: display 1s allow-discrete, overlay 1s allow-discrete, opacity 1s;
  opacity: 0;
}

/* ON STAGE */
:popover-open,
:popover-open::backdrop,
[open],
[open]::backdrop {
  opacity: 1;
}

/* OFF STAGE */
/* starting-style for pre-positioning (enter stage from here) */
@starting-style {
  :popover-open,
  :popover-open::backdrop,
  [open],
  [open]::backdrop {
    opacity: 0;
  }
}

While this code is effective and terse, it's often not enough customization if you want to present and dismiss dialogs differently than you do popovers, or make the entry animation different than the exit.

Transition a dialog

Here's a <dialog> element with this barebones snippet applied. They look pretty terrible out of the box, but you can do amazing things with them.


To get started, a <dialog> element needs to be in the HTML. A <dialog> should be shown and hidden with buttons: the one that shows it can live anywhere in the page, while the one that closes it should be inside the dialog.

<button onclick="demo.showModal()">…</button>

<dialog id="demo">
  <header>
    <button title="Close" onclick="demo.close()">close</button>
  </header>
</dialog>

Tip💡

You can enable light dismiss on a dialog and skip the close button like <dialog closedby="any">

To animate the dialog transition:

  1. Two parts need animation: the <dialog> itself and its ::backdrop.
  2. When a dialog is shown, display: none is changed to display: block and transition-behavior enables timing this change with our animation.
  3. When a dialog is shown, it is promoted into the top layer; this also needs to be timed with our animation.
  4. The [open] attribute is used to know when the dialog is open or closed. @starting-style is used during the first render as starting styles.
dialog {
  /* Exit Stage To */
  transform: translateY(-20px);

  &, &::backdrop {
    transition:
      display 1s allow-discrete,
      overlay 1s allow-discrete,
      opacity 1s ease,
      transform 1s ease;

    /* Exit Stage To */
    opacity: 0;
  }

  /* On Stage */
  &[open] {
    opacity: 1;
    transform: translateY(0px);

    &::backdrop {
      opacity: 0.8;
    }
  }

  /* Enter Stage From */
  @starting-style {
    &[open],
    &[open]::backdrop {
      opacity: 0;
    }

    &[open] {
      transform: translateY(20px);
    }
  }
}

With this snippet as a starting point, you can find three popular dialog experiences to take code or inspiration from in have-a-dialog.

The following video shows the excellent keyboard experience. It also demonstrates the interruptible nature of a CSS transition, so a user can close it anytime they want and always see a smooth interface.


nerdy.dev/have-a-dialog

Resources

Transition a popover

Like a <dialog> element, a popover appears over everything else in the top layer. Light dismiss is the default, and keyboard / focus management is all done for you.


Let's build it.

There's an HTML aspect to implementing the UX:

<button popovertarget="pop">?</button>

<p id="pop" popover>An overlay with additional information.</p>

Also, like a <dialog> element, to animate the transition of a popover's display property and insertion into the top layer, you need to combine transition-behavior and @starting-style.

Notice that with a popover, the open state isn't an attribute; it's the CSS pseudo-class :popover-open.

[popover] {
  &, &::backdrop {
    transition:
      display 0.5s allow-discrete,
      overlay 0.5s allow-discrete,
      opacity 0.5s,
      transform 0.5s;

    /* Exit Stage To */
    opacity: 0;
  }

  /* On Stage */
  &:popover-open {
    opacity: 1;

    &::backdrop {
      opacity: 0.5;
    }
  }

  /* Enter Stage From */
  @starting-style {
    &:popover-open,
    &:popover-open::backdrop {
      opacity: 0;
    }

    &:popover-open {
      transform: translateY(10px);
    }
  }
}

Resources

Transition animation for <details>

6 CSS Snippets Every Front-End Developer Should Know In 2025

Found on the CSS Wrapped 2024 website in the desktop layout.


The disclosure element (<details>) has been waiting for CSS primitives to unlock its animation potential for many years.

<details>
  <summary>Show disclosed content</summary>
  <p>…</p>
</details>

The details element needs to transition to height: auto from height: 0px, and we need a way to target the slotted content it uses internally for the disclosure. The new interpolate-size feature handles the height animation, and ::details-content provides the selector.

details {
  inline-size: 50ch;

  @media (prefers-reduced-motion: no-preference) {
    interpolate-size: allow-keywords;
  }

  &::details-content {
    opacity: 0;
    block-size: 0;
    overflow-y: clip;
    transition: content-visibility 1s allow-discrete, opacity 1s, block-size 1s;
  }

  &[open]::details-content {
    opacity: 1;
    block-size: auto;
  }
}

Resources

Bonus attribute

6 CSS Snippets Every Front-End Developer Should Know In 2025

If you want to connect two or more details elements and have them close each other respectively, you can accomplish this with a shared name attribute on each detail element you want to be connected. Very much like a radio group.


https://developer.chrome.com/blog/styling-details

<details name="linked-disclosure">
  <summary>Show disclosed content</summary>
  <p>…</p>
</details>

<!-- name="linked-disclosure" connects these together -->

<details name="linked-disclosure">
  <summary>Show disclosed content</summary>
  <p>…</p>
</details>

Animated adaptive gradient text

A bold headline in a design is often complemented with a gradient, helping draw the eye with intrigue and vividness.

Since 2015 the web has been able to create gradient text effects, and in the past 10 years, there have been some updates and enhancements: animation, user preferences and interpolation.

  1. Adapting the gradient text effect to light and dark themes is easy with the prefers-color-scheme query
  2. New animation updates enable gradient effects to go beyond spinning or moving gradient images around, but to change colors over time
  3. New interpolation updates allow those mixes to be more vivid, rich, and interesting
@property --color-1 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

@property --color-2 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

@keyframes color-change {
  to {
    --color-1: var(--_color-1-to);
    --color-2: var(--_color-2-to);
  }
}

.gradient-text {
  --_space: ;

  /* light mode */
  --_color-1-from: yellow;
  --_color-1-to: orange;
  --_color-2-from: purple;
  --_color-2-to: hotpink;

  /* dark mode */
  @media (prefers-color-scheme: dark) {
    --_color-1-from: lime;
    --_color-1-to: cyan;
    --_color-2-from: cyan;
    --_color-2-to: deeppink;
  }

  --color-1: var(--_color-1-from);
  --color-2: var(--_color-2-from);

  animation: color-change 2s linear infinite alternate;

  background: linear-gradient(
    to right var(--_space),
    var(--color-1),
    var(--color-2)
  );

  /* old browser support */
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;

  /* modern browser version */
  background-clip: text;
  color: transparent;

  @supports (background: linear-gradient(in oklch, #fff, #fff)) {
    --_space: in oklch;
  }
}

That's quite a snippet 😅 How did it get to that?

Most developers making a gradient text effect are starting here:

.gradient-text {
  background: linear-gradient(to right, hotpink, cyan);
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;
}

remove the prefixes

The first update or enhancement is to remove the prefixes. So that older browsers continue to support the effect, keep the prefixed declarations and add the unprefixed values after:

.gradient-text {
  background: linear-gradient(to right, hotpink, cyan);

  /* old browser support */
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;

  /* modern browser version */
  background-clip: text;
  color: transparent;
}

Use updated gradient interpolation spaces

Next, improve the quality of the gradient by progressively enhancing the in interpolation syntax with CSS variables and @supports.

You could alternatively repeat the gradient definition and include in oklch in it, which would also work great and support older browsers.

.gradient-text {
  --_space: ;

  background: linear-gradient(to right var(--_space), hotpink, cyan);

  /* old browser support */
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;

  /* modern browser version */
  background-clip: text;
  color: transparent;

  @supports (background: linear-gradient(in oklch, #fff, #fff)) {
    --_space: in oklch;
  }
}

Create typed <color> properties

For the gradient color animation use @property, like described in snippet #4. The typed color values can be animated inside of a gradient, like a gradient used with text.

@property --color-1 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

@property --color-2 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

Now --color-1 can be animated, like transition: --color-1 .3s ease, or used in keyframes. These animatable values can be used anywhere a color is allowed, like in a gradient text effect.

@keyframes color-change {
  to {
    --color-1: lime;
    --color-2: orange;
  }
}

.gradient-text {
  animation: color-change 2s linear infinite alternate;
}

Make a few props, Swap em' in a dark MQ

To keep things declarative, I've also defined color variables to hold the colors for animation.

.gradient-text {
  --_color-1-from: yellow;
  --_color-1-to: orange;
  --_color-2-from: purple;
  --_color-2-to: hotpink;

  @media (prefers-color-scheme: dark) {
    --_color-1-from: lime;
    --_color-1-to: cyan;
    --_color-2-from: cyan;
    --_color-2-to: deeppink;
  }

  /* set our typed variables to the "from" values */
  --color-1: var(--_color-1-from);
  --color-2: var(--_color-2-from);
}

Put all those moments and reasons together, and we have arrived at the final snippet:

@property --color-1 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

@property --color-2 {
  syntax: "<color>";
  inherits: false;
  initial-value: #000;
}

@keyframes color-change {
  to {
    --color-1: var(--_color-1-to);
    --color-2: var(--_color-2-to);
  }
}

.gradient-text {
  --_space: ;

  /* light mode */
  --_color-1-from: yellow;
  --_color-1-to: orange;
  --_color-2-from: purple;
  --_color-2-to: hotpink;

  /* dark mode */
  @media (prefers-color-scheme: dark) {
    --_color-1-from: lime;
    --_color-1-to: cyan;
    --_color-2-from: cyan;
    --_color-2-to: deeppink;
  }

  --color-1: var(--_color-1-from);
  --color-2: var(--_color-2-from);

  animation: color-change 2s linear infinite alternate;

  background: linear-gradient(
    to right var(--_space),
    var(--color-1),
    var(--color-2)
  );

  /* old browser support */
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;

  /* modern browser version */
  background-clip: text;
  color: transparent;

  @supports (background: linear-gradient(in oklch, #fff, #fff)) {
    --_space: in oklch;
  }
}

Some years these snippets are short and sweet, but not this year. Watch out for next year's article, who knows what you'll need to know!


6 CSS Snippets Every Front-End Developer Should Know In 2025
]]>
<![CDATA[New Angular Space Advisor!]]>The Angular Space vision is growing bigger every day,
with the aim of creating a truly unique experience.


To make sure I can provide the highest quality going forward, I decided to bring a new Advisor into our ranks!


Welcome Gregor Ojstersek!



Gregor Ojstersek is a CTO who founded the Engineering Leadership newsletter (150k+ subscribers)

]]>
https://www.angularspace.com/new-angular-space-advisor/687921a908c66f00012ef916Fri, 18 Jul 2025 09:21:44 GMT

The Angular Space vision is growing bigger every day,
with the aim of creating a truly unique experience.


To make sure I can provide the highest quality going forward, I decided to bring a new Advisor into our ranks!


Welcome Gregor Ojstersek!


New Angular Space Advisor!


Gregor Ojstersek is a CTO who founded the Engineering Leadership newsletter (150k+ subscribers).

We've already run 2x giveaways of his amazing workshops, which teach a much-needed mindset switch both to everyone aspiring to be a leader and to existing leaders looking to improve in the role.

His expert insights and ability to execute will bring me much-needed help crafting Angular Space going forward.

Advisors are here to help me decide how to proceed with Angular Space's growth & development.

A Group of friendly people to brainstorm ideas with.

]]>
<![CDATA[Certificates.dev Review: Mid-level Angular Developer]]>Although Angular is regarded as an "enterprise framework," developers often struggle to prove their knowledge without practical tasks. Of course, there's the "Google Developer Experts" program, but some of its requirements might deter potential experts, and it doesn’t focus solely on Angular

]]>
https://www.angularspace.com/certificates-dev-review-mid-level-angular-developer/6811cd7f7a82f20001a671e9Wed, 16 Jul 2025 12:57:15 GMT

Although Angular is regarded as an "enterprise framework," developers often struggle to prove their knowledge without practical tasks. Of course, there's the "Google Developer Experts" program, but some of its requirements might deter potential experts, and it doesn’t focus solely on Angular - instead, it promotes Google technologies such as cloud solutions, Firebase, Android, and AI.

This is where platforms like Certificates.dev come in—offering certification programs specifically focused on frontend technologies, including Angular! Thanks to the platform’s authors and AngularSpace, I had the opportunity to go through the preparation process and certification for the Mid-Level Angular Developer. Thank you!

Let's start at the beginning - Who Am I?

I’m Adam, a developer with over a decade of experience. By profession, I am a Fullstack Developer, comfortable with both frontend and backend, but my passion lies in Angular, which I fell in love with from the first line of code. I worked extensively with AngularJS 1.3 (yes, really!), I remember the monumental release of Angular 2.0, and now I closely follow every new Angular release with excitement. Angular is my primary tool for work and my first choice for new web projects.

Yet, I don't feel entirely confident in my Angular knowledge. There are some concepts I understand in my own way, and others I need further explanation on. Certificates.dev offers a training program and certification for developers, not only at the Mid-Level but also at Junior and Senior levels!

Beyond Angular certifications, the platform also provides certification paths for Vue.js, JavaScript, and Nuxt, with upcoming courses for React, TypeScript, and even Tailwind! All certifications are created in collaboration with experts and platform creators, making it a comprehensive offer worth considering!

Certificates.dev Review: Mid-level Angular Developer

Day 0: First look at the panel

My first impression after logging in was very positive! The dashboard’s color scheme references Angular’s rebranding. The interface is clear: on the left, there’s a panel with available functions and user information, while on the right, there are descriptions of the available actions. The first thought that crossed my mind was, “It’s possible to create a great-looking, Angular-themed page without using Material Design, right?”

Certificates.dev Review: Mid-level Angular Developer

I was also pleasantly surprised by the mobile view. When I'm choosing a course, it's important for me to know which platforms I can access it from. Just like with news, sometimes I read on a larger screen during a work break, while at other times, I want to glance through something while I'm on the road (as a passenger, of course!) or before sleep. Surprisingly, the site is highly responsive.

Certificates.dev Review: Mid-level Angular Developer

First day of training, and the days after...

I wasn’t confident enough to take the full exam right away, so the next evening, I decided to go through some training lessons. The training section follows the same dashboard theme, with progress tracking on the left and main content on the right—everything remains clear and responsive.

Certificates.dev Review: Mid-level Angular Developer

What surprised me was the format of the lessons.

Certificates.dev Review: Mid-level Angular Developer

Above, you can see an example chapter dedicated to Signals in Angular. This is a completely different approach compared to other courses I’ve encountered.

I expected long-form articles or perhaps videos explaining the topic. Instead, we get a brief description of the lesson topic and a set of links to relevant resources.

At first, I was a bit confused by this approach. Some Angular topics are so complex that entire conference talks are dedicated to them—yet here, we only get a summary and a list of links. Does this mean the training is incomplete? Actually, no!

It took me some time to get used to this format. The links provided in the lessons mostly lead to the official Angular documentation, blog posts from Angular Training, or Medium articles written by Alain Chautard under the Angular Training brand. This is an interesting way to present training material: rather than lengthy lessons, the platform provides a knowledge base—concise, high-quality content that can be read in just a few minutes.

This is how an example article looks; this one is about "Standalone Components":

Certificates.dev Review: Mid-level Angular Developer

One might ask, "If these links are publicly available, why should I pay for someone who aggregates them into chapters?" My answer is simple: time-saving. The internet is full of articles of varying quality. In my experience, most of them are written as tutorials, where before getting to the core topic, you must wade through installation instructions for Node.js, Angular CLI, project setup, and library installations. Here, the provided content is curated by Google Developer Experts: free from unnecessary fluff and focused on the topic. The authors assume you already have some knowledge, but a few paragraphs can refine and deepen your understanding.

However, this approach does have a drawback: taking this course on a mobile device becomes inconvenient. While I tried to complete lessons during breaks, opening and closing multiple browser tabs wasn’t comfortable. In the end, I had to finish certain lessons on my desktop.

From time to time, quizzes also appear!

Certificates.dev Review: Mid-level Angular Developer

The quiz questions are relevant and refer to the topics covered in the chapter. I didn’t notice any mistakes, and they serve as a great way to reinforce learning.

Additionally, the course includes several challenges where you download a project and modify it according to given requirements.

Certificates.dev Review: Mid-level Angular Developer

However, I missed one crucial feature: a way to verify whether my implementation was correct. I understand that developing an automated grading system is time-consuming and costly, but implementing unit tests could be a solution. Instead of manually comparing projects, users could write code, test it in a browser, and run unit tests to check their work.

Completing the training took me a few days, spending about an hour daily reading articles, solving quizzes, and completing programming tasks. I believe a highly motivated person could finish it in just one day!

Certificates.dev Review: Mid-level Angular Developer

What was missing? It seems to me that a person at the Mid level should have a basic understanding of @ngrx/store; I missed a dedicated lesson describing it, even as an "add-on" that would not be included in the exam. I would also have expanded chapter five on Dependency Injection to mention InjectionToken and "factory" providers at the component, module, and application level. Perhaps these are advanced topics, but such knowledge is very welcome.

It's time for the Exam!

After completing the entire training, we have the opportunity to take a "trial exam." This is a shortened version of what we can expect in the main exam—fewer questions, simpler coding tasks, and less time to complete them. Honestly, after the training, I felt confident and went straight to the main exam. The moment I started it, my heart began to race...

The exam takes itself very seriously. At the beginning, we are instructed on how the test works and how we should prepare for it. We are required to turn on our webcam and microphone, turn off our phone, and disconnect any additional screens from our device if applicable. External software verifies our identity, asks us to show our desk to check for any cheating attempts, and throughout the exam, we are continuously monitored — our webcam, microphone, and screen are recorded. These verification methods are not used for the trial exam, so for someone experiencing this for the first time, it can be nerve-wracking!

The exam is divided into two parts. The first part consists of a test with 40 closed questions. The questions are, of course, related to the knowledge gained during the training, primarily focusing on Angular. However, in my case, there were also a few questions related to JavaScript/TypeScript that required some thought. This section has a time limit of 30 minutes, and at least 29 correct answers are required to pass.

The second part is a practical exam, where we have to complete two tasks. In the first task, we must find and fix a bug in an existing application. The exam creators suggest spending about 15 minutes on this (it took me 25 minutes — shame on me!). The second task involves implementing certain functionalities in an application according to the given requirements. Fortunately, the application already includes templates and necessary data (which we would normally fetch from an API), so we only need to focus on implementing the logic in TypeScript. This part turned out to be quite manageable for me and I finished with 30 minutes to spare (not shame on me!).

Before the exam, we have access to a short video explaining how the practical section works, but I’ll describe it for those interested. Before starting, we receive a full description of what is expected from us. We work with an editor using StackBlitz, which includes all the necessary project files. This means we cannot use our preferred IDE, but instead, we get a toolbar with quick access to task requirements, the Angular documentation, and the MDN documentation. Clicking on these resources opens them in a "SideNav," which is a very well-thought-out solution. It keeps the code in view, prevents cheating, and best of all, does not require us to memorize the entire documentation to pass the exam. If needed, we can quickly reference it during the exam. In my case, I had to use one of Angular’s built-in Pipes, which I had never used before. A quick look at the documentation helped me understand its parameters and implement the required functionality!

Once the exam ends, we are no longer tracked. I breathed a sigh of relief when I saw the message: "Thank you for completing the exam." The same message informed me that the grading process would take about five business days. However, the next morning, I checked my email and saw a message from Certificates.dev that...

I'm Certified Mid-Level Angular Developer!

Certificates.dev Review: Mid-level Angular Developer

The certificate can be downloaded, shared on social media, and added to the "Licenses & Certifications" section of your LinkedIn profile to earn some likes from colleagues.

Summary

Would I recommend this platform for learning and certifying Angular skills? Absolutely! The training provides a solid dose of knowledge along with high-quality articles from experts who truly know their stuff. The Mid-Level certification is well-balanced, and it covers exactly the topics I would expect to discuss with another "Mid-Level developer".

Once again, I would like to thank AngularSpace for the opportunity to test and verify my knowledge, but most importantly, a huge thanks to the creators of Certificates.dev — this platform is a remarkable achievement with a unique yet effective approach to delivering and validating knowledge. I hope that the certification I obtained will give me more confidence when discussing Angular topics.

I will keep an eye on this platform and will certainly take additional lessons from it. I'm rooting for the continued development of this platform and looking forward to even more certification options! I have already signed up for the React course, and in the future, I will likely go for the "Senior Angular Developer" training and certification — so see you soon!


Certificates.dev Review: Mid-level Angular Developer


]]>
<![CDATA[From AngularConnect to AngularDisconnect - Giveaway Cancelled]]>Hi everyone,

I'm really disappointed to share that I need to cancel the giveaway for the 90% discounted AngularConnect tickets.

This is the first time something like this has happened. I’ve organized many giveaways before and they’ve always gone smoothly.

Based on the original

]]>
https://www.angularspace.com/from-angularconnect-to-angulardisconnect-giveaway-cancelled/6874337459e8470001914a0dTue, 15 Jul 2025 07:56:47 GMT

Hi everyone,

I'm really disappointed to share that I need to cancel the giveaway for the 90% discounted AngularConnect tickets.

This is the first time something like this has happened. I’ve organized many giveaways before and they’ve always gone smoothly.

Based on the original communication I received from the organizers, there was no mention that the discount would be limited to in-person tickets only.

The promo banner I received (which I thankfully hadn't used) also mentioned that the 90% discount codes should apply to workshops as well. Guess what? They didn't, but I managed to catch that beforehand. It just sounded like too good a deal, since workshops are expensive.

From AngularConnect to AngularDisconnect - Giveaway Cancelled

With the conference tickets, however, I naturally assumed the codes would work for online tickets as well, just like many similar codes do, and I never expected this to be an issue.

After selecting the winners (both of whom chose the online ticket option), I tested the codes, only to find out they didn't apply. I reached out for clarification, explained the situation, and even offered to help cover the cost if needed (even though online tickets are cheaper).

Unfortunately, the organizers decided to stick firmly to their policy and refused to make an exception, despite the miscommunication on their side and the fact that the winners had already been selected.

I respect the organizers' right to set their own terms; it's their conference, after all. But I do believe this could have been communicated more clearly from the start and handled with more flexibility.

Thank you to everyone who participated in the giveaway, supported it, or looked forward to attending.

I hope AngularConnect ends up being an amazing conference, and I wish them all the best, but under these circumstances Angular Space needs to withdraw from the community partnership.

I'm truly sorry for the inconvenience, especially to the winners; I’ll be reaching out to you directly with info about the consolation prize I have personally prepared.

If anyone has questions, feel free to message me.

]]>