<![CDATA[Labbots]]>https://labbots.com/https://labbots.com/favicon.pngLabbotshttps://labbots.com/Ghost 3.41Sat, 21 Mar 2026 11:32:25 GMT60<![CDATA[Terraform Google site verification]]>https://labbots.com/terraform-google-site-verification/6048be787a08d71229dec7b1Wed, 10 Mar 2021 13:14:40 GMT

Terraform has no native support for Google site verification. There is, however, a third-party provider that can perform site verification using a DNS TXT record. The provider does not support verification via an alternative DNS CNAME record.

Provider GitHub: https://github.com/hectorj/terraform-provider-googlesiteverification

Terraform Registry: https://registry.terraform.io/providers/hectorj/googlesiteverification/latest

Usage

Add the third-party provider to your Terraform configuration.

terraform {
  required_providers {
    googlesiteverification = {
      source = "hectorj/googlesiteverification"
      version = "0.4.2"
    }
  }
}

Configure Terraform for site verification using the Site Verification API and Google Cloud DNS.

# Variables created for demonstration purposes. These values can also be loaded from data sources.
variable "subdomain" {
  default = "test"
}
variable "google_dns_managed_zone_name" {
  default = "example-zone-name"
}
variable "google_dns_managed_zone_dns_name" {
  default = "example.com"
}

# Create new service account
resource "google_service_account" "siteverifier" {
  account_id   = "google-site-verifier"
  display_name = "Google Site verification account"
}

# Generate service account key
resource "google_service_account_key" "siteverifier" {
  service_account_id = google_service_account.siteverifier.name
}

# Initialise provider with service account key
provider "googlesiteverification" {
  credentials = base64decode(google_service_account_key.siteverifier.private_key)
}

# Enable site verification api
resource "google_project_service" "siteverification" {
  service = "siteverification.googleapis.com"
}

# Request a DNS token from the Site Verification API
data "googlesiteverification_dns_token" "run_sub_domain" {
  domain     = "${var.subdomain}.${var.google_dns_managed_zone_dns_name}"
  depends_on = [google_project_service.siteverification]
}

# Create new DNS record in cloud DNS with the verification token returned from googlesiteverification_dns_token
resource "google_dns_record_set" "run_sub_domain" {
  managed_zone = var.google_dns_managed_zone_name
  name         = data.googlesiteverification_dns_token.run_sub_domain.record_name
  rrdatas      = [data.googlesiteverification_dns_token.run_sub_domain.record_value]
  type         = data.googlesiteverification_dns_token.run_sub_domain.record_type
  ttl          = 60
}

# Request google to verify the newly added verification record
resource "googlesiteverification_dns" "run_sub_domain" {
  domain     = "${var.subdomain}.${var.google_dns_managed_zone_dns_name}"
  token      = data.googlesiteverification_dns_token.run_sub_domain.record_value
}

Cons

  1. The provider seems not to read authentication credentials from the gcloud application default credentials, and requires the credentials to be passed as part of provider initialisation.
  2. The only way I got the provider to work was by creating a new service account. That works, but I can't figure out why it doesn't work with the default gcloud credentials, which have owner access.
  3. The provider only supports the DNS_TXT verification method and does not support CNAME verification. DNS_CNAME support is needed for subdomains that already have a CNAME configured.
  4. The provider has no useful documentation to refer to as of version 0.4.2. Hopefully some documentation will appear in future releases.
]]>
<![CDATA[Enabling touch ID for access on Terminal]]>https://labbots.com/enabling-touch-id-for-access-on-terminal/6033a02d684fcf10da670525Mon, 22 Feb 2021 12:45:18 GMT

MacBook Pro models have a fingerprint scanner (Touch ID) to simplify the login process, but it is not exposed to the terminal. So each time you run a command with elevated privileges, you need to type in your password. The following simple change allows you to use Touch ID for authentication in the terminal.

To use macOS Touch ID in the terminal for sudo access instead of entering your system password:

Edit the file /etc/pam.d/sudo with your favourite editor.

sudo vim /etc/pam.d/sudo

Add the following line to the top of the file.

auth sufficient pam_tid.so
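If you prefer to script the change, here is a small sketch that prepends the rule only if it is missing, so it is safe to re-run. It operates on a throwaway demo file with made-up stand-in contents; on a real machine you would point PAM_FILE at /etc/pam.d/sudo and run it with sudo.

```shell
# Demo file with stand-in contents; use PAM_FILE=/etc/pam.d/sudo (via sudo) on a real system
PAM_FILE="${PAM_FILE:-/tmp/pam_sudo_demo}"
printf '# sudo: auth account password session\nauth       sufficient     pam_opendirectory.so\n' > "$PAM_FILE"

# Prepend the Touch ID rule only if it is not already present (idempotent)
if ! grep -q 'pam_tid\.so' "$PAM_FILE"; then
  { echo 'auth       sufficient     pam_tid.so'; cat "$PAM_FILE"; } > "$PAM_FILE.new"
  mv "$PAM_FILE.new" "$PAM_FILE"
fi
head -n 1 "$PAM_FILE"
```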

To enable Touch ID access in iTerm2, you need to do the following.
Go to Preferences -> Advanced -> "Allow sessions to survive logging out and back in" and set the value to No.

Restart iTerm2 and voilà, Touch ID authentication will work in iTerm2 too.

]]>
<![CDATA[Measure CPU and GPU temperature of Raspberry PI]]>https://labbots.com/rpi-measure-cpu-gpu-temperature/5f5bb1c0684fcf10da6704deFri, 11 Sep 2020 17:40:00 GMTMeasure CPU and GPU temperature of Raspberry PI

A simple script to measure the CPU and GPU temperature of a Raspberry Pi. The script below has been tested on the Raspberry Pi 2, 3B and 3B+.

The script uses vcgencmd to get the GPU temperature from the VideoCore and reads the CPU temperature from sysfs.

Running the script

  1. Download the script from gist to the Raspberry Pi.
  2. Set appropriate permission to the file.
    chmod 775 pi-temp-measure.sh
    
  3. Run the script
    ./pi-temp-measure.sh
    

The script produces output like the following:

-------------------------------------------
Fri 11 Sep 18:17:35 BST 2020 @ pi-home
-------------------------------------------
GPU => 39.2'C
CPU => 38.6'C
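The measurement itself boils down to two reads. Below is a minimal sketch of the idea, not the exact gist: the vcgencmd call and sysfs read are guarded so the snippet degrades gracefully on machines that lack them.

```shell
#!/usr/bin/env bash
# Convert the millidegree value reported by sysfs into degrees Celsius
to_celsius() { awk -v m="$1" 'BEGIN { printf "%.1f", m / 1000 }'; }

# GPU temperature from the VideoCore firmware (only available on a Pi)
if command -v vcgencmd >/dev/null 2>&1; then
  gpu=$(vcgencmd measure_temp | sed "s/temp=//;s/'C//")
else
  gpu="n/a"
fi

# CPU temperature from sysfs, reported in millidegrees Celsius
if [ -r /sys/class/thermal/thermal_zone0/temp ]; then
  cpu=$(to_celsius "$(cat /sys/class/thermal/thermal_zone0/temp)")
else
  cpu="n/a"
fi

echo "GPU => ${gpu}'C"
echo "CPU => ${cpu}'C"
```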

]]>
<![CDATA[Ubuntu 18.04 installation with LUKS and LVM]]>https://labbots.com/ubuntu-18-04-installation-with-luks-and-lvm/5f368ff99d6dfc0e9c363ed9Wed, 05 Jun 2019 21:24:00 GMT

Installation Process

Ubuntu 18.04 installation with LUKS and LVM

Pre-installation from live OS

This setup of Ubuntu with LUKS and LVM is tested on Ubuntu 18.04.

Boot Ubuntu from a Live OS and select the option to try Ubuntu without installing. Follow the steps I've outlined below. Let's assume you're installing to /dev/nvme0n1.

  1. Partition the drive with your tool of choice: I used gparted to set mine up.
    • Make sure the drive in which we are about to install is completely unallocated.
    • The first partition must always be the ESP partition. Set the following fields:
      • Free space preceding - Change only if required (it might not accept zero)
      • New Size - 550MiB
      • Free space following - (will be calculated automatically)
      • Align to - MiB
      • Partition Name - EFI System Partition
      • File System - fat32
      • Label - ESP
    • Press Add, and then the big green tick and "Apply".
    • Right-click your new partition (with the name "EFI System Partition") and select "Manage Flags".
    • Select "esp", which will automatically change a couple of other flags. Press Close.
    • The next partition is the boot partition. Set the following fields:
      • Free space preceding - Automatic value
      • New Size - 1024 MiB
      • Free space following - (will be calculated automatically)
      • Align to - MiB
      • Partition Name - boot
      • File System - ext4
      • Label - boot
    • The next partition is the encryption partition. Set the following fields:
      • Free space preceding - Automatic value
      • New Size - Entire space available
      • Free space following - (will be calculated automatically)
      • Align to - MiB
      • Partition Name - system
      • File System - cleared
      • Label - system
  2. The resulting partition table will look as follows:
    • nvme0n1p1: EFI partition 550 MiB
    • nvme0n1p2: /boot (1G)
    • nvme0n1p3: LUKS partition (the rest of the disk)
  3. Setup LUKS
    • sudo cryptsetup luksFormat --hash=sha512 --key-size=512 --cipher=aes-xts-plain64 --verify-passphrase /dev/nvme0n1p3
    • sudo cryptsetup luksOpen /dev/nvme0n1p3 CryptDisk
    • While not necessary, it is a good idea to fill your LUKS partition with zeros so that the partition, in an encrypted state, is filled with random data. sudo dd if=/dev/zero of=/dev/mapper/CryptDisk bs=4M BEWARE, this could take a really long time!
  4. Setup LVM on /dev/mapper/CryptDisk
    • sudo pvcreate /dev/mapper/CryptDisk
    • sudo vgcreate vg0 /dev/mapper/CryptDisk
    • sudo lvcreate -n swap -L 20G vg0
    • sudo lvcreate -n root -l +100%FREE vg0

Installation from live OS

  1. Now you're ready to install. When you get to the "Installation type" step of the installer, choose the "Something else" option. Then manually assign the /dev/mapper/vg0-* partitions as you would like to have them configured. Don't forget to set /dev/nvme0n1p2 as /boot; the /boot partition must not be encrypted, or the system will not be able to boot.
  2. Press the "Change…" button and assign the boot, swap and root (/) partitions to the installation partitions.
  3. Change the "Device for boot loader installation" to /dev/nvme0n1, and continue with the installation.
  4. When the installation is complete, don't reboot! Choose the option to "Continue Testing".

Post-installation configuration from live OS

  1. In a terminal, type the following and look for the UUID of /dev/nvme0n1p3. Take note of that UUID for later.

    • sudo blkid | grep LUKS
    • The important line on my machine reads /dev/nvme0n1p3: UUID="bd3b598d-88fc-476e-92bb-e4363c98f81d" TYPE="crypto_LUKS" PARTUUID="50d86889-02"
  2. Next, let's get the newly installed system mounted again so we can make some more changes.

    • sudo mount /dev/vg0/root /mnt
    • sudo mount /dev/nvme0n1p2 /mnt/boot
    • sudo mount --bind /dev /mnt/dev
    • sudo mount --bind /run/lvm /mnt/run/lvm
    • sudo mount /dev/nvme0n1p1 /mnt/boot/efi
  3. Now run sudo chroot /mnt to access the installed system

  4. From the chroot, mount a couple more things
    - mount -t proc proc /proc
    - mount -t sysfs sys /sys
    - mount -t devpts devpts /dev/pts

  5. Set up crypttab. Using your favourite text editor, create the file /etc/crypttab and add the following line, replacing the UUID with the UUID of your disk.
    - CryptDisk UUID=bd3b598d-88fc-476e-92bb-e4363c98f81d none luks,discard

  6. Lastly, rebuild some boot files.
    - update-initramfs -k all -c
    - update-grub

  7. Reboot, and the system should ask for a password to decrypt on boot!
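Rather than copying the UUID from step 1 by hand, it can be extracted from the blkid line with sed. The snippet below runs against the sample line quoted above; on a real system you would capture the line from sudo blkid /dev/nvme0n1p3.

```shell
# Sample blkid output from above; on a real system: line=$(sudo blkid /dev/nvme0n1p3)
line='/dev/nvme0n1p3: UUID="bd3b598d-88fc-476e-92bb-e4363c98f81d" TYPE="crypto_LUKS" PARTUUID="50d86889-02"'

# Pull out the value of the UUID= field
uuid=$(printf '%s\n' "$line" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')

# The resulting crypttab entry from step 5
echo "CryptDisk UUID=${uuid} none luks,discard"
```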


Enabling System Hibernation

Configuring encrypted Swap

  1. Identify the Swap partition path by viewing the fstab.
    • cat /etc/fstab
    • The swap path would look something like /dev/mapper/vg0-swap
  2. Create a resume file in initramfs so the swap can be loaded at boot.
    • sudo gedit /etc/initramfs-tools/conf.d/resume
    • Add the following line to the file and save it: RESUME=/dev/mapper/vg0-swap
  3. Add the same resume value to grub.
    • sudo gedit /etc/default/grub
    • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=/dev/mapper/vg0-swap"
  4. Update kernel image and grub
    sudo update-initramfs -u -k all
    sudo update-grub

Enabling Hibernate

  1. Test whether hibernate is supported in your system by manually running the hibernate command from the terminal
    sudo systemctl hibernate

  2. If hibernate works as expected, then create the following file.

    • sudo gedit /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla
  3. Add the following snippet to the file and save it.


[Re-enable hibernate by default in upower]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes

[Re-enable hibernate by default in logind]
Identity=unix-user:*
Action=org.freedesktop.login1.hibernate;org.freedesktop.login1.handle-hibernate-key;org.freedesktop.login1;org.freedesktop.login1.hibernate-multiple-sessions;org.freedesktop.login1.hibernate-ignore-inhibit
ResultActive=yes

  4. Restart the system after modifying the configuration.
  5. Install the "Hibernate status button" gnome extension to add a hibernate button to the GUI.

Enabling PM Utils

  1. Install PM Utils using the following command.
    sudo apt install --assume-yes --quiet pm-utils
  2. Check if your system supports hybrid suspend
    sudo pm-is-supported --suspend-hybrid && echo 'Hybrid suspend available' || echo 'Hybrid suspend NOT supported'
  3. If hybrid suspend is supported then add the following lines to /etc/systemd/logind.conf

HandleSuspendKey=hybrid-sleep
HandleLidSwitch=hybrid-sleep


Nvidia graphics driver issue

I had issues with suspend and hibernate when using the Nvidia graphics driver (Quadro P1000). If you encounter such issues, add the following line to /etc/default/grub:

GRUB_CMDLINE_LINUX="nouveau.blacklist=1 acpi_rev_override=1 acpi_osi=Linux acpiphp.disable=1 nouveau.modeset=0 pcie_aspm=force drm.vblankoffdelay=1 scsi_mod.use_blk_mq=1 nouveau.runpm=0 mem_sleep_default=deep"

Once the configuration is saved, run the following command to refresh grub:
sudo update-grub


References

  1. Custom encryption setup on Ubuntu 18.04
  2. Manual full system encryption on Ubuntu 18.04
  3. Enable Hibernation on Ubuntu 18.04
  4. Script for LUKS partitioning installation
  5. Guide on encrypted ubuntu installation with LUKS and LVM
  6. Fix for suspend issue with Nvidia graphic driver in Ubuntu 18.04
  7. Installing Nvidia graphics driver in Ubuntu 18.04
  8. Method to disable Nouveau Nvidia driver
]]>
<![CDATA[Bash script to Switch PHP Versions in Ubuntu]]>https://labbots.com/bash-script-to-switch-php-versions-in-ubuntu/5f368ff99d6dfc0e9c363ed8Thu, 19 Oct 2017 09:13:56 GMTBash script to Switch PHP Versions in Ubuntu

I found myself constantly needing to switch PHP versions in my development environment depending on the project I was working on. I wanted a simple way to switch PHP versions, and all the tools I found were heavyweight or too complicated for my purpose. So I wrote a simple bash script that lists the available PHP versions and lets you switch between them. The script is written for Ubuntu and tested on Ubuntu 16.04.

Basic Usage

./switchPhp.sh <php_version>

You can download the script from my GitHub gist.
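For a rough idea of what the script does, the core switching logic can be sketched as below. This is a hypothetical sketch, not the gist itself; the real script also lists the installed versions and prompts for input.

```shell
# Sketch: switch the system default php binary on Debian/Ubuntu
switch_php() {
  version="$1"
  # Refuse to switch to a version that is not installed
  if ! command -v "php$version" >/dev/null 2>&1; then
    echo "php$version is not installed" >&2
    return 1
  fi
  # Point the php alternative at the requested binary
  sudo update-alternatives --set php "/usr/bin/php$version"
}

# Usage: switch_php 7.1
```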

]]>
<![CDATA[My Favorite Bash aliases that increases productivity]]>https://labbots.com/my-favorite-bash-aliases-that-increases-productivity/5f368ff99d6dfc0e9c363ed7Fri, 11 Nov 2016 15:35:35 GMT

Introduction

My Favorite Bash aliases that increases productivity

I have been using Linux (Ubuntu) for well over a decade, and I fell in love with the command-line interface, which gives you more control and power over the system. Anyone who loves Linux would probably agree with me. The more you use the command line, the more you notice that you use some commands more often on a day-to-day basis than others. For such commands, bash provides a way to create custom shortcuts, and this is a real time saver. I can't imagine myself without my bash aliases.

Here are some of the handy aliases that I use in my Ubuntu environment.

The complete set of bash aliases and the help document script can be found on my gist page.

Setup

Setting up bash aliases is simple on Ubuntu. All that is required is to create a .bash_aliases file in the user's home directory.

 touch $HOME/.bash_aliases

Declaring an alias is also simple. Just open the newly created .bash_aliases file in your favourite editor and add your aliases. The syntax looks like the following:

alias alias_name="command_to_run"

General

Simple directory traversal aliases that will save you a heck of a lot of time:

alias ..='cd ..'
alias ...='cd ../../'

We can also customise the ls command with aliases to make it more useful:

# Lists contents of the current directory with file permissions
alias ll='ls -l'

# Lists all directories in the current directory
alias ldir='ls -l | grep ^d'

# Lists directory contents and pipes the output to less. Makes viewing large directories easy
alias lsl="ls -lhFA | less"

Open the working directory in GUI file explorer using Nautilus

#Opens current directory in a file explorer
alias explore='nautilus .'

#Opens current directory in a file explorer with super user privileges
alias suexplore='sudo nautilus .'

Open current directory in Ubuntu's Disk Usage Analyzer GUI with super user privileges in the background

alias analyze='gksudo baobab . &'

Opens a GUI text editor in the background

#Opens a GUI text editor in the background. Can obviously be replaced with your favorite editor
alias text='gedit &'
#Same as above with super user privileges
alias sutext='gksudo gedit &'

Open a file with whatever program would open by double clicking on it in a GUI file explorer. Requires gnome-open to be installed

alias try='gnome-open'

Find files in current directory

alias fhere="find . -name "

Search process in process table

alias psg="ps aux | grep -v grep | grep -i -e VSZ -e"

Aliasing for Alias

We can also create some aliases to display the list of configured aliases, edit them, and reload them without having to log out of the system.

Here is the alias to list all aliases. This comes in handy if you ever forget your aliases

alias a='echo "------------Your aliases------------";alias'

Edit your aliases using your favourite editor

alias via='gksudo gedit ~/.bash_aliases &'

Load your aliases after adding new one in the file

alias sa='source ~/.bash_aliases;echo "Bash aliases sourced."'

IP address

If you are, like me, into networking and web development, you most probably want to check your public IP address, or, if using a VPN, the location of your VPN IP address. The following aliases make it easier to check that information:

# Get your public ip address
alias ip='curl icanhazip.com'
# Get the location of your public IP address
alias iploc="curl -s http://whatismycountry.com/ |   sed -n 's|.*> *\(.*\)</h3>|\1|p'"
# Get complete information of your public IP address
alias ipinfo='curl ipinfo.io'

Restart network service

In Ubuntu I have an annoying problem of network disruption due to an old network card, which requires restarting network-manager to reconnect to the internet. So here is an alias that I created to restart the network service.

#Restart network manager
alias netstart='sudo /usr/sbin/service network-manager restart'

To make sure there is no password prompt when running the alias, you could add the following entry to the sudoers list

# No password for the network-manager service. (Replace your_username with your system username)
your_username ALL=NOPASSWD: /usr/sbin/service network-manager restart

tldr;

Here is the complete list of useful aliases that you can use by simply creating a .bash_aliases file in your home directory.

If you are looking to find your most commonly used commands, you can search your history. The following one-liner is useful:

history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl |  head -n10


Reference

  1. Introduction to Useful Bash Aliases - DigitalOcean.
]]>
<![CDATA[Networking Interview Questions]]>https://labbots.com/networking-interview-questions/5f368ff99d6dfc0e9c363ed6Thu, 09 Jun 2016 16:25:20 GMTNetworking Interview Questions

  1. What is a subnet and what are the benefits of subnetting?

Subnetting is used in IP networks to break larger networks up into smaller subnetworks.
It helps reduce network traffic and the size of routing tables. It is also a way to add security by isolating traffic from the rest of the network.
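As a worked example of the arithmetic involved (made-up numbers): borrowing host bits from a /24 network trades hosts per subnet for more subnets.

```shell
# Borrowing 2 host bits from a /24 turns it into four /26 subnetworks
prefix=26
subnets=$(( 1 << (prefix - 24) ))        # 2^borrowed_bits
hosts=$(( (1 << (32 - prefix)) - 2 ))    # minus the network and broadcast addresses
echo "/26: $subnets subnets, $hosts usable hosts each"
```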

  2. What can you tell me about the OSI Reference Model?

The OSI Reference Model provides a framework for discussing network design and operations. It groups communication functions into 7 logical layers, each one building on the next.

  • Layer 7: The application layer. This is the layer at which communication partners are identified (Is there someone to talk to?), network capacity is assessed (Will the network let me talk to them right now?), and data to send is created or received data is opened. (This layer is not the application itself; it is the set of services an application should be able to make use of directly, although some applications may perform application-layer functions.)

  • Layer 6: The presentation layer. This layer is usually part of an operating system (OS) and converts incoming and outgoing data from one presentation format to another (for example, from clear text to encrypted text at one end and back to clear text at the other).

  • Layer 5: The session layer. This layer sets up, coordinates and terminates conversations. Services include authentication and reconnection after an interruption. On the Internet, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) provide these services for most applications.

  • Layer 4: The transport layer. This layer manages packetization of data, then the delivery of the packets, including checking for errors in the data once it arrives. On the Internet, TCP and UDP provide these services for most applications as well.

  • Layer 3: The network layer. This layer handles the addressing and routing of the data (sending it in the right direction to the right destination on outgoing transmissions and receiving incoming transmissions at the packet level). IP is the network layer for the Internet.

  • Layer 2: The data-link layer. This layer sets up links across the physical network, putting packets into network frames. This layer has two sub-layers, the Logical Link Control Layer and the Media Access Control Layer. Ethernet is the main data link layer in use.

  • Layer 1: The physical layer. This layer conveys the bit stream through the network at the electrical, optical or radio level. It provides the hardware means of sending and receiving data on a carrier network.

  3. What is a 3-way handshake?

A three-way-handshake is a method used in a TCP/IP network to create a connection between a local host/client and server. It is a three-step method that requires both the client and server to exchange SYN and ACK (acknowledgment) packets before actual data communication begins.

  • A client node sends a SYN data packet over an IP network to a server on the same or an external network. The objective of this packet is to ask whether the server is open for new connections.
  • The target server must have open ports that can accept and initiate new connections. When the server receives the SYN packet from the client node, it responds and returns a confirmation receipt - the ACK packet or SYN/ACK packet.
  • The client node receives the SYN/ACK from the server and responds with an ACK packet.

  4. How does traceroute work?

Traceroute is a program that shows you the route taken by packets through a network. Traceroute sends UDP packets towards the destination, taking advantage of ICMP messages.

  • Traceroute creates a UDP packet from the source to destination with a TTL(Time-to-live) = 1
  • The UDP packet reaches the first router where the router decrements the value of TTL by 1, thus making our UDP packet’s TTL = 0 and hence the packet gets dropped.
  • Noticing that the packet got dropped, the router sends an ICMP message (Time exceeded) back to the source.
  • Traceroute makes a note of the router’s address and the time taken for the round-trip.
  • It sends two more packets in the same way to get an average value of the round-trip time. Usually, the first round-trip takes longer than the other two due to the delay in ARP finding the physical address, the address stays in the ARP cache during the second and the third time and hence the process speeds up.
  • The steps above repeat until the destination has been reached. The only change is that the TTL is incremented by 1 each time a UDP packet is sent to the next router/host.
  • Once the destination is reached, no Time exceeded ICMP message is sent back this time, because the destination has already been reached.
  • However, the UDP packet used by traceroute specifies a destination port number that is not usually used for UDP. Hence, when the destination computer verifies the headers of the UDP packet, the packet gets dropped because of the improper port, and an ICMP message (this time Destination Unreachable) is sent back to the source.
  • When traceroute encounters this message, it understands that the destination has been reached. The destination, too, is probed three times to get the average round-trip time.
  5. How does DNS work? Have you ever had DNS go down? When should you have backup DNS – have you ever had to set this up for a website?

When you visit a domain such as xyz.com, your computer follows a series of steps to turn the human-readable web address into a machine-readable IP address. This happens every time you use a domain name, whether you are viewing websites, sending email or listening to Internet radio stations like Pandora.

  • Step 1: Request information
    The process begins when you ask your computer to resolve a hostname, such as visiting http://xyz.com. The first place your computer looks is its local DNS cache, which stores information that your computer has recently retrieved.
    If your computer doesn’t already know the answer, it needs to perform a DNS query to find out.

  • Step 2: Ask the recursive DNS servers
    If the information is not stored locally, your computer queries (contacts) your ISP’s recursive DNS servers. These specialized computers perform the legwork of a DNS query on your behalf. Recursive servers have their own caches, so the process usually ends here and the information is returned to the user.

    • Step 3: Ask the root nameservers
      If the recursive servers don’t have the answer, they query the root nameservers. A nameserver is a computer that answers questions about domain names, such as IP addresses. The thirteen root nameservers act as a kind of telephone switchboard for DNS. They don’t know the answer, but they can direct our query to someone that knows where to find it.

    • Step 4: Ask the TLD nameservers
      The root nameservers will look at the first part of our request, reading from right to left — www.xyz.com — and direct our query to the Top-Level Domain (TLD) nameservers for .com. Each TLD, such as .com, .org, and .us, has its own set of nameservers, which act like a receptionist for that TLD. These servers don't have the information we need, but they can refer us directly to the servers that do have the information.

    • Step 5: Ask the authoritative DNS servers
      The TLD nameservers review the next part of our request — www.xyz.com — and direct our query to the nameservers responsible for this specific domain. These authoritative nameservers are responsible for knowing all the information about a specific domain, which are stored in DNS records. There are many types of records, which each contain a different kind of information. In this example, we want to know the IP address for www.xyz.com, so we ask the authoritative nameserver for the Address Record (A).

    • Step 6: Retrieve the record
      The recursive server retrieves the A record for xyz.com from the authoritative nameservers and stores the record in its local cache. If anyone else requests the host record for xyz.com, the recursive servers will already have the answer and will not need to go through the lookup process again. All records have a time-to-live value, which is like an expiration date. After a while, the recursive server will need to ask for a new copy of the record to make sure the information doesn’t become out-of-date.

    • Step 7: Receive the answer
      Armed with the answer, the recursive server returns the A record to your computer. Your computer stores the record in its cache, reads the IP address from the record, then passes this information to your browser. The browser then opens a connection to the web server and receives the website.

This entire process, from start to finish, takes only milliseconds to complete.

The major point of having a secondary DNS server is as a backup in the event the primary DNS server handling your domain goes down. In that case your server would still be up, but without a backup DNS server nobody could reach it, possibly costing you lots of customers.
A secondary DNS server is always up and ready to serve. It can also help balance load on the network, as there is now more than one authoritative place to get your information. Updates are generally performed automatically from the master DNS, so it is an exact clone of the master.

  6. What is a disaster recovery plan? Have you ever created one? In your opinion, how (and how often) should you test your plan?

A disaster recovery plan (DRP) is a plan for business continuity in the event of a disaster that destroys part or all of a business's resources, including IT equipment, data records and the physical space of an organisation.
A DRP is a documented process or set of procedures to recover and protect a business's IT infrastructure in the event of a disaster. Such a plan, ordinarily documented in written form, specifies the procedures an organisation is to follow in the event of a disaster. It is "a comprehensive statement of consistent actions to be taken before, during and after a disaster." The disaster could be natural, environmental or man-made, and man-made disasters could be intentional or unintentional.
The frequency of your tests may vary depending on the particular piece of the plan you're working with. Many industries have implemented regulations that require DR solution testing to occur at least once a year, and these regulations often specify even more frequent testing.
Personally, I would suggest carrying out DR testing at least twice a year.
  7. What is the minimum set of things you should monitor? What is the optimum set? What has worked well in practice?

Monitoring refers to the practice of collecting regular data regarding your infrastructure in order to provide alerts of unplanned downtime, network intrusion, and resource saturation. Monitoring also makes operational practices auditable, which is useful in forensic investigations and for determining the root cause of errors. Monitoring provides the basis for the objective analysis of systems administration practices and IT in general.

At a minimum, an infrastructure should have basic server health monitoring (CPU, memory, I/O, disk space and network) and application monitoring.

  8. What is a proxy server?

Most large businesses, organizations, and universities these days use a proxy server. This is a server that all computers on the local network have to go through before accessing information on the Internet. By using a proxy server, an organization can improve the network performance and filter what users connected to the network can access.

  9. What is DHCP?

Dynamic Host Configuration Protocol is the default way of connecting to a network. The implementation varies across operating systems, but the simple explanation is that there is a server on the network that hands out IP addresses when requested. Upon connecting to a network, a DHCP request is sent out by the new member system. The DHCP server responds and issues an address lease for a varying amount of time. If the system connects to another network, it will be issued a new address by that server, but if it reconnects to the original network before the lease is up, it will be re-issued the same address it had before.

DHCP lease generation is a four-step process called DORA, which expands as follows:

  • D – Discover
    The client broadcasts messages on the network subnet using the destination address 255.255.255.255 or the specific subnet broadcast address. A DHCP client may also request its last-known IP address. If the client remains connected to the same network, the server may grant the request. Otherwise, it depends whether the server is set up as authoritative or not. An authoritative server denies the request, causing the client to issue a new request. A non-authoritative server simply ignores the request, leading to an implementation-dependent timeout for the client to expire the request and ask for a new IP address.
  • O – Offer
    When a DHCP server receives a DHCPDISCOVER message from a client, which is an IP address lease request, the server reserves an IP address for the client and makes a lease offer by sending a DHCPOFFER message to the client. This message contains the client's MAC address, the IP address that the server is offering, the subnet mask, the lease duration, and the IP address of the DHCP server making the offer.
  • R – Request
    In response to the DHCP offer, the client replies with a DHCP request, broadcast to the server, requesting the offered address. A client can receive DHCP offers from multiple servers, but it will accept only one DHCP offer. Based on the required server identification option in the request and broadcast messaging, servers are informed whose offer the client has accepted. When other DHCP servers receive this message, they withdraw any offers that they might have made to the client and return the offered address to the pool of available addresses.
  • A – Acknowledgement
    When the DHCP server receives the DHCPREQUEST message from the client, the configuration process enters its final phase. The acknowledgement phase involves sending a DHCPACK packet to the client. This packet includes the lease duration and any other configuration information that the client might have requested. At this point, the IP configuration process is completed.
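The four DORA steps above can be sketched as a toy exchange. This is a minimal illustration, not a real DHCP implementation: the `DHCPServer` class, message dicts, and field names are invented for demonstration (there is no UDP, no options parsing, no lease timers).

```python
# Toy model of the DHCP DORA exchange. All names here are illustrative,
# not part of any real DHCP library.

class DHCPServer:
    def __init__(self, pool):
        self.pool = list(pool)   # available addresses
        self.leases = {}         # mac -> ip

    def handle(self, msg):
        if msg["type"] == "DHCPDISCOVER":
            ip = self.pool[0]    # reserve the first free address
            return {"type": "DHCPOFFER", "mac": msg["mac"], "ip": ip,
                    "lease_seconds": 86400}
        if msg["type"] == "DHCPREQUEST":
            self.pool.remove(msg["ip"])
            self.leases[msg["mac"]] = msg["ip"]
            return {"type": "DHCPACK", "mac": msg["mac"], "ip": msg["ip"],
                    "lease_seconds": 86400}

server = DHCPServer(pool=["192.168.1.10", "192.168.1.11"])

# D - Discover: client broadcasts, asking for an address
offer = server.handle({"type": "DHCPDISCOVER", "mac": "aa:bb:cc:dd:ee:ff"})
# O - Offer: server reserves an IP and offers it
# R - Request: client broadcasts a request for the offered address
ack = server.handle({"type": "DHCPREQUEST", "mac": "aa:bb:cc:dd:ee:ff",
                     "ip": offer["ip"]})
# A - Acknowledgement: server confirms the lease
print(ack["type"], ack["ip"])
```

The lease duration here is fixed at one day; a real server returns a configured lease time and tracks expiry so addresses return to the pool.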
  1. What is a subnet mask?

A subnet mask is a 32-bit number used to determine which subnet an IP address belongs to. An IP address has two components, the network address and the host address; the subnet mask separates the IP address into these network and host portions.
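The network/host split can be demonstrated with Python's standard `ipaddress` module; the sample address and mask below are arbitrary examples.

```python
# Splitting an IP address into network and host parts with a subnet mask,
# using only the Python standard library.
import ipaddress

iface = ipaddress.ip_interface("192.168.10.77/255.255.255.0")  # /24 mask
network = iface.network                        # network portion
mask_int = int(iface.netmask)                  # mask as a 32-bit number
host_int = int(iface.ip) & ~mask_int & 0xFFFFFFFF  # host portion

print(network)        # 192.168.10.0/24
print(iface.netmask)  # 255.255.255.0
print(host_int)       # 77
```

The bitwise AND with the inverted mask is exactly what the prose describes: the mask's 1-bits select the network address, its 0-bits select the host address.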

  1. What are sticky ports?

You can use the port security feature to restrict input to an interface by limiting and identifying MAC addresses of the workstations that are allowed to access the port. When you assign secure MAC addresses to a secure port, the port does not forward packets with source addresses outside the group of defined addresses. If you limit the number of secure MAC addresses to one and assign a single secure MAC address, the workstation attached to that port is assured the full bandwidth of the port.

If a port is configured as a secure port and the maximum number of secure MAC addresses is reached, when the MAC address of a workstation attempting to access the port is different from any of the identified secure MAC addresses, a security violation occurs.
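The port-security behaviour above can be modelled in a few lines. This is a toy simulation only; the `SecurePort` class and its method names are invented for illustration, and real switches add modes (shutdown, restrict, protect) and aging that are omitted here.

```python
# Toy model of switch port security with sticky MAC learning: the port
# learns source MACs up to a configured maximum; frames from unknown MACs
# beyond that limit trigger a security violation.

class SecurePort:
    def __init__(self, max_secure_macs=1):
        self.max_secure_macs = max_secure_macs
        self.secure_macs = set()

    def receive(self, src_mac):
        if src_mac in self.secure_macs:
            return "forwarded"
        if len(self.secure_macs) < self.max_secure_macs:
            self.secure_macs.add(src_mac)  # sticky-learn the new MAC
            return "forwarded"
        return "violation"                 # limit reached, unknown source

port = SecurePort(max_secure_macs=1)
print(port.receive("aa:aa:aa:aa:aa:aa"))  # forwarded (learned)
print(port.receive("aa:aa:aa:aa:aa:aa"))  # forwarded (already secure)
print(port.receive("bb:bb:bb:bb:bb:bb"))  # violation
```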

  1. What’s your experience of configuration management?
Configuration Management is the practice of handling changes systematically so that a system maintains its integrity over time. Chef and Puppet are some of the tools used to manage configuration changes of infrastructure. 

Advantages of configuration management include:

  • Streamlining the processes of maintenance, repair, expansion and upgrading.
  • Minimizing configuration errors.
  • Minimizing downtime.
  • Optimizing network security.
  • Ensuring that changes made to a device or system do not adversely affect other devices or systems.
  • Rolling back changes to a previous configuration if results are unsatisfactory.
  • Archiving the details of all network configuration changes.
  1. What is network configuration management?
Network configuration management (NCM) is the process of organizing and maintaining information about all the components of a computer network. When a network needs repair, modification, expansion or upgrading, the administrator refers to the network configuration management database to determine the best course of action. This database contains the locations and network addresses of all hardware devices, as well as information about the programs, versions and updates installed in network computers. NCM can be as simple as having all the configuration documented in spreadsheets or using enterprise level 3rd party tools to manage and analyse network configuration changes.
  1. What are a runt, a giant, and a collision?
  • A runt is a packet in a network that is too small.
  • A giant is a packet, frame, cell or any other transmission unit that is too large.
  • In a half-duplex Ethernet network, a collision is the result of two devices on the same Ethernet network attempting to transmit data at exactly the same time. The network detects the "collision" of the two transmitted packets and discards them both.
  1. How does ping work?

The Internet Ping program works much like a sonar echo-location, sending a small packet of information containing an ICMP ECHO_REQUEST to a specified computer, which then sends an ECHO_REPLY packet in return. The IP address 127.0.0.1 is set by convention to always indicate your own computer. Therefore, a ping to that address will always ping yourself and the delay should be very short. This provides the most basic test of your local communications.

  1. What is broadcast storm?

Broadcast radiation is the accumulation of broadcast and multicast traffic on a computer network. Extreme amounts of broadcast traffic constitute a broadcast storm. A broadcast storm can consume sufficient network resources so as to render the network unable to transport normal traffic.

  1. What is the purpose of VRRP?

Virtual Router Redundancy Protocol (VRRP) enables you to set up a group of routers as a default gateway router (VRRP group) for backup or redundancy purposes. This way, the PC clients can point to the IP address of the VRRP virtual router as their default gateway. If the master router in the group goes down, one of the other routers takes over.

  1. How do you distinguish a DNS problem from a network problem?

If you're truly experiencing a DNS issue, your system will not be able to resolve host names (google.com) into IP addresses (74.125.225.78), which is what your computer really uses to communicate. A simple test to verify that this is the case is to go to your terminal, ping a host name, and then try to ping an IP address (on the internet). If you're able to ping the IP address and not the FQDN, then you've got yourself a DNS issue, because your DNS provider is not translating that name to an IP.

You could also use the nslookup command to resolve an FQDN to an IP address. If the command does not resolve it to an IP address but you are able to ping the IP address directly, then there is likely a problem with the DNS.
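The diagnostic reasoning above can be written down as a pure function. This is a sketch of the decision logic only: it performs no real network I/O, and the function name and inputs are invented for illustration (in practice you would feed it the results of your ping/nslookup tests).

```python
# The DNS-vs-network decision logic as a pure function: given whether a
# hostname resolves and whether a known IP answers pings, classify the fault.

def diagnose(can_resolve_name: bool, can_ping_ip: bool) -> str:
    if not can_ping_ip:
        return "network problem"  # can't even reach a bare IP address
    if not can_resolve_name:
        return "dns problem"      # IP reachable, but names don't resolve
    return "ok"

print(diagnose(can_resolve_name=False, can_ping_ip=True))   # dns problem
print(diagnose(can_resolve_name=False, can_ping_ip=False))  # network problem
print(diagnose(can_resolve_name=True, can_ping_ip=True))    # ok
```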

  1. What is the difference between layer 2 and layer 3 devices?

A L2 switch does switching only. This means that it uses MAC addresses to switch the packets from a port to the destination port (and only the destination port). It therefore maintains a MAC address table so that it can remember which ports have which MAC address associated.

A L3 switch also does switching exactly like a L2 switch. The L3 means that it has an identity at the L3 layer. Practically this means that a L3 switch is capable of having IP addresses and doing routing. For intra-VLAN communication, it uses the MAC address table. For inter-VLAN communication, it uses the IP routing table.
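The MAC-learning behaviour of a L2 switch can be sketched as a tiny simulation. The `L2Switch` class and its names are invented for illustration; real switches also age out table entries and handle VLANs, which this toy omits.

```python
# Toy model of L2 switching: learn which port each source MAC arrived on,
# then forward to the known port, or flood to all other ports if unknown.

class L2Switch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # mac -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port       # learn the source MAC
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]    # known destination: one port
        return sorted(self.ports - {in_port})   # unknown destination: flood

sw = L2Switch(ports=[1, 2, 3])
print(sw.receive(1, "aa", "bb"))  # dst unknown -> flood to [2, 3]
print(sw.receive(2, "bb", "aa"))  # "aa" learned on port 1 -> [1]
print(sw.receive(1, "aa", "bb"))  # "bb" learned on port 2 -> [2]
```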

  1. What is MTU?

A maximum transmission unit (MTU) is the largest size packet or frame, specified in octets (eight-bit bytes), that can be sent in a packet- or frame-based network such as the Internet. The Internet's Transmission Control Protocol (TCP) uses the MTU to determine the maximum size of each packet in any transmission. Too large an MTU size may mean retransmissions if the packet encounters a router that can't handle that large a packet. Too small an MTU size means relatively more header overhead and more acknowledgements that have to be sent and handled.
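The overhead trade-off described above can be shown with a little arithmetic. The 40-byte figure is an assumption for illustration (a 20-byte IPv4 header plus a 20-byte TCP header, with no options):

```python
# Why MTU size matters: with a fixed per-packet header cost, smaller MTUs
# spend a larger fraction of every packet on headers instead of payload.

HEADER_BYTES = 40  # assumed: 20-byte IPv4 header + 20-byte TCP header

def header_overhead(mtu: int) -> float:
    """Fraction of each full-size packet consumed by headers."""
    return HEADER_BYTES / mtu

for mtu in (1500, 576, 68):
    print(f"MTU {mtu}: {header_overhead(mtu):.1%} header overhead")
# MTU 1500: 2.7% header overhead
# MTU 576: 6.9% header overhead
# MTU 68: 58.8% header overhead
```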


]]>
<![CDATA[Microsoft Exchange Interview Questions]]>
]]>
https://labbots.com/microsoft-exchange-interview-questions/5f368ff99d6dfc0e9c363ed5Tue, 07 Jun 2016 18:28:34 GMT
  1. Mention what are the new features in MS Exchange 2013?
  • Integration with Lync and SharePoint: With site mailboxes and in-place eDiscovery, it
    offers greater integration with Microsoft SharePoint and Lync
  • Provides a resilient solution: Built upon Exchange Server 2010 and redesigned for
    simplicity of scale, feature isolation and hardware utilization
  • Supports a multigenerational workforce: Users can merge contacts from multiple
    sources, and smart search allows users to search for people in the network
  • Provides an engaging experience: The MS web app focuses on a streamlined user
    interface that supports the use of touch, enhancing the use of mobile devices
  • Meets the latest demands: With improved search and indexing, you can search across
    Lync 2013, Exchange 2013, SharePoint 2013, etc.
  • DAG system: A new evolution of the Exchange 2010 DAG
  1. Mention what is recommended when you are using an Exchange account for your work and you are offline?

It is suggested that you use Cached Exchange Mode when you are using an Exchange account for your work, as it removes most reasons to work offline. With Cached Exchange Mode, you can keep working even if you are disconnected from the network. Cached Exchange Mode uses an offline folder file (.ost) and manages a synchronized copy of the items in all folders in the mailbox when you are offline. As soon as you are connected to the network, it syncs your data automatically to the server without losing any data.

  1. Mention what are the roles in MS exchange 2013?

In MS Exchange 2013, there are two roles: Client Access Server and Mailbox Server.

  1. Mention what is the role of Client Access Server?

The Client Access Server gives connectivity to various services like

  • Microsoft Office Outlook
  • Outlook Web App
  • Mobile devices
  • POP & SMTP
  • Accepts mail from, and delivers mail to, other mail hosts on the internet
  • Gives unified namespace, network security and authentication
  • Handles all client requests for Exchange
  • Routes requests to the correct mailbox server
  • Allows the use of layer 4 (TCP affinity) routing
  1. Mention what is the role of Mailbox server?

Mailbox servers help in

  • e-mail storage
  • Host public folder databases
  • Host mailbox databases
  • Calculate e-mail address policies
  • Performs multi-mailbox searches
  • Provide high availability and site resiliency
  • Provide messaging records management and retention policies
  • Handle connectivity as clients don’t connect directly to the mailbox services
  • For given mailbox, it provides all core exchange functionality
  • When a database fails over, access to the mailbox fails over along with it
  1. Explain what are the important features of Transport Pipeline?

Transport pipeline is made up of three different services:

  • Front End Transport service: It does basic message filtering based on domains, connectors, senders and recipients. It only connects with the Transport service on a Mailbox server and does not queue any messages locally
  • Transport service: It runs on all Mailbox servers and handles SMTP mail flow. It helps in message categorization and content inspection. The Transport service routes messages between the Mailbox Transport service, the Transport service and the Front End Transport service. This service does not queue messages locally
  • Mailbox Transport service: It receives and sends SMTP mail to and from the Transport service, and communicates with the mailbox database using RPC (Remote Procedure Call)
  1. Explain what is the role of categorizer?

Categorizer performs following functions

  • Recipient Resolution: The e-mail address of the recipient is resolved to decide whether the recipient has got a mailbox in the Exchange Organization or an external e-mail address
  • Routing Resolution: Once the information regarding the recipient is resolved, the ultimate destination for the mail is determined, along with the next hop to route it there
  • Content Conversion: Once the message is routed to its destination, it is converted into a readable format such as HTML, rich text format or plain text
  1. Explain the term DAG (Database Availability Group)?

A DAG, or Database Availability Group, is a framework built into MS Exchange 2013. It is a group of up to 16 Mailbox servers that hosts a set of databases and provides automatic database-level recovery from failures of servers or databases.

  1. Mention how many types of delivery groups found in MS Exchange 2013?

In MS Exchange 2013, there are five types of delivery groups

  • Routable DAG
  • Mailbox delivery groups
  • Connector source servers
  • AD site
  • Server list
  1. Explain how message is delivered to the mailbox database in Exchange 2013?

In Exchange 2013, after the message reaches the target Mailbox server in the destination AD site, the Transport service uses SMTP to hand the message to the Mailbox Transport service. The Mailbox Transport service then delivers the message to the local mailbox database using RPC.

  1. What actions does the Front End Transport service perform?

The Front End Transport service does one of the following actions based on the number and type of recipients:

  • For a message with a single mailbox recipient, it selects a Mailbox server in the target delivery group, giving preference to Mailbox servers based on AD site proximity
  • For a message with multiple mailbox recipients, it uses the first 20 recipients to select a Mailbox server in the closest delivery group, based on AD site proximity
  • If the message has no mailbox recipients, it selects a random Mailbox server in the local AD site
  1. Mention what is the function of mailbox Transport Submission service?

The Mailbox Transport Submission service does one of the following actions based on the number and type of recipients:

  • For a message with only one mailbox recipient, it selects a Mailbox server in the target delivery group, giving priority to Mailbox servers based on AD site proximity
  • For a message with multiple mailbox recipients, it uses the first 20 recipients to select a Mailbox server in the closest delivery group, based on AD site proximity
  • If there is no mailbox recipient, it selects a Mailbox server in the local delivery group
  1. How is the flow of mail tracked in MS Exchange 2013?

To track message flow in MS Exchange 2013, Delivery Reports are used. They are applicable for Outlook and Outlook Web App only. However, Message Tracking Logs are also helpful for following the flow of mail.

  1. What are the prerequisites needed to install Exchange Server 2013 SP1 (CPU, memory, disk & OS)?
  • Microsoft Operating System: Windows Server 2012 R2, Windows Server 2012 and Windows Server 2008 R2 with
    Service Pack 1 (SP1) operating system

  • Components

    • Microsoft .NET Framework 4.5
    • Windows Management Framework 4.0
    • Remote Tools Administration Pack
    • ADLDS for Exchange Server 2013 Edge Server Role
  • Memory

    • Mailbox 8GB minimum
    • Client Access 4GB minimum
    • Mailbox and Client Access combined 8GB minimum
    • Edge Transport 4GB minimum
  • Disk space

    • At least 30 GB on the drive on which you install Exchange
    • An additional 500 MB of available disk space for each Unified Messaging (UM) language pack
    • 200 MB of available disk space on the system drive
    • A hard disk that stores the message queue database on with at least 500 MB of free space.
  1. Where does Exchange Server store Exchange-related information in Active Directory?
  • Domain partition – stores mail-enabled recipients, groups and contacts at the domain level
  • Configuration partition – stores Exchange configuration information such as policies, global settings, address lists and
    connectors, and contains information related to the forest level
  • Schema partition – stores the Exchange-specific classes and attributes
  1. List out the purpose of running prepare schema and prepare AD switches in Exchange server 2013
  • Prepare Schema – After running the Prepare Schema switch, the Active directory will contain the classes and attributes
    required to support Exchange environment
  • Prepare AD – after running the Prepare AD switch, a new container is created to hold details of everything
    from servers to databases to connectors. This process also creates universal security groups to manage Exchange and
    sets appropriate permissions on objects to allow them to be managed.
  1. What is the purpose of Autodiscover service & Availability service
  • Autodiscover service — The Autodiscover service does the following:
    • Automatically configures user profile settings for clients running Microsoft Office Outlook 2007, Outlook
      2010, or Outlook 2013, as well as supported mobile phones.
    • Provides access to Exchange features for Outlook 2007, Outlook 2010, or Outlook 2013 clients that are
      connected to your Exchange messaging environment.
    • Uses a user's email address and password to provide profile settings to Outlook 2007, Outlook 2010, or
      Outlook 2013 clients and supported mobile phones. If the Outlook client is joined to a domain, the user's
      domain account is used.
  • Availability service—The Availability service is the replacement for Free/Busy functionality responsible for making a
    user’s calendar availability visible to other users making meeting requests.
    • Retrieve current free/busy information for Exchange 2013 mailboxes
    • Retrieve current free/busy information from other Exchange 2013 organizations
    • Retrieve published free/busy information from public folders for mailboxes on servers that have previous
      versions of Exchange
    • View attendee working hours
    • Show meeting time suggestions
  1. What DNS host records are required for receiving email from the internet?

A mail exchange (MX) record that contains information about which mail server the domain uses to receive mail.

  1. Explain the list of files found under the Exchange 2013 database folder
  • *.edb file – A mailbox database is stored as an Exchange database (.edb) file.
  • Checkpoint file (.chk) – keeps track of which transaction logs have been committed to the database file, ensuring
    log entries are written to the database in the correct order
  • Transaction log – the E00.log file, into which current transactions are written. When it reaches 1 MB, it is
    renamed to a numbered log file such as E00000001.log
  • Tmp.edb – temporary database file, used while transaction logs are processed before being written to the .edb database
    file
  • .jrs – reserved log files – if the disk fills up and no more mail can be written as transaction logs, these files
    come into action
  1. What you mean by database portability

Database portability is a feature that enables a Microsoft Exchange Server 2013 mailbox database to be moved to or mounted on any other Mailbox server in the same organization running Exchange 2013 that has databases with the same database schema version. Mailbox databases from previous versions of Exchange can't be moved to a Mailbox server running Exchange 2013. By using database portability, reliability is improved by removing several error-prone, manual steps from the recovery processes. In addition, database portability reduces the overall recovery times for various failure scenarios.

  1. Explain the mail flow in Exchange server 2013

The below diagram provides more detail on the mail flow in Exchange server 2013.
[Diagram: mail flow in Exchange Server 2013]

  1. What is S/MIME certificate and how to send email using S/MIME certificate
  • S/MIME (Secure/Multipurpose Internet Mail Extensions) is used to encrypt outgoing messages and attachments
    so that only intended recipients who have a digital identification (ID), also known as a certificate, can read them. With
    S/MIME, users can digitally sign a message, which provides the recipients with a way to verify the identity of the sender
    and that the message hasn't been tampered with.
  • Setting up S/MIME for Outlook Web App needs Exchange 2013 SP1 and can be configured using the PowerShell commands
    Get-SmimeConfig and Set-SmimeConfig
  1. How Activesync works in Exchange Server 2013
  • Microsoft ActiveSync provides synchronized access to email from a handheld device, such as a Pocket PC or other
    Windows Mobile device. It allows for real-time send and receive functionality to and from the handheld, through the
    use of push technology.
  • A mobile device that's configured to synchronize with an Exchange 2013 server issues an HTTPS request to the server.
    This request is known as a PING. The request tells the server to notify the device if any items change in the next 15
    minutes in any folder that's configured to synchronize. Otherwise, the server should return an HTTP 200 OK message.
    The mobile device then stands by. The 15-minute time span is known as a heartbeat interval.
  • If no items change in 15 minutes, the server returns a response of HTTP 200 OK. The mobile device receives this
    response, resumes activity (known as waking up), and issues its request again. This restarts the process.
  • If any items change or new items are received within the 15-minute heartbeat interval, the server sends a response that informs the mobile device that there's a new or changed item and provides the name of the folder in which the new or changed item resides. After the mobile device receives this response, it issues a synchronization request for the folder that has the new or changed item. When synchronization is complete, the mobile device issues a new PING request and the whole process starts over.
  1. What is the purpose of retention policy tag
  • Retention tags are used to apply retention settings to folders and individual items such as e-mail messages and voice mail. These settings specify how long a message remains in a mailbox and the action to be taken when the message reaches the specified retention age. When a message reaches its retention age, it's moved to the user’s In-Place Archive
    or deleted.
  • Unlike managed folders (the MRM feature introduced in Exchange Server 2007), retention tags allow users to tag their own mailbox folders and individual items for retention. Users no longer have to file items in managed folders provisioned by an administrator based on message retention requirements.
  1. Difference between proxy and re-direction terminology in Exchange Server 2013
  • Microsoft Client Access server can act as a proxy for other Client Access servers within the organization. This is useful when multiple Client Access servers exist in different Active Directory sites in an organization, and at least one of those sites isn't exposed to the Internet.
  • A Client Access server can also perform redirection for Microsoft Office Outlook Web App URLs and for Exchange ActiveSync devices. Redirection is useful when users connect to a Client Access server that isn't in their local Active Directory site, or if a mailbox has moved between Active Directory sites. It's also useful if users should actually be using a more effective URL. For example, users should be using a URL that's closer to the Active Directory site in which their mailbox resides.
  1. What is the purpose of File Share Witness

A witness server is a server outside a DAG that's used to achieve and maintain quorum when the DAG has an even number of members. DAGs with an odd number of members don't use a witness server. All DAGs with an even number of members must use a witness server. The witness server can be any computer running Windows Server. There is no requirement that the version of the Windows Server operating system of the witness server matches the operating system used by the DAG members.

  1. List out the different type of quorum model used in Exchange server 2013
  • Even – Node and File Share Majority quorum mode; Odd – Node Majority quorum mode
  • DAGs with an even number of members use the failover cluster's Node and File Share Majority quorum mode, which employs an external witness server that acts as a tie-breaker. In this quorum mode, each DAG member gets a vote. In addition, the witness server is used to provide one DAG member with a weighted vote (for example, it gets two votes
    instead of one). The cluster quorum data is stored by default on the system disk of each member of the DAG, and is kept consistent across those disks. However, a copy of the quorum data isn't stored on the witness server. A file on the witness server is used to keep track of which member has the most updated copy of the data, but the witness server
    doesn't have a copy of the cluster quorum data. In this mode, a majority of the voters (the DAG members plus the witness server) must be operational and able to communicate with each other to maintain quorum. If a majority of the voters can't communicate with each other, the DAG's underlying cluster loses quorum, and the DAG will require administrator intervention to become operational again.
  • DAGs with an odd number of members use the failover cluster's Node Majority quorum mode. In this mode, each member gets a vote, and each member's local system disk is used to store the cluster quorum data. If the configuration of the DAG changes, that change is reflected across the different disks. The change is only considered to have been committed and made persistent if that change is made to the disks on half the members (rounding down) plus one. For example, in a five-member DAG, the change must be made on two plus one members, or three members total.
  1. Difference between Primary Active Manager and Standby Active Manager
  • The Primary Active Manager (PAM) runs inside the Microsoft Exchange Replication service and is used to notify and react
    in case of server failure. The PAM owns the cluster quorum resource and holds the information about active, passive and
    mounted databases.
  • The Standby Active Manager (SAM) provides information about the server hosting the active copy of a mailbox database to
    the Client Access or Transport services.
  1. What is the purpose of safety-net and transport dumpster
  • Transport dumpster helps to protect against data loss by maintaining a queue of successfully delivered messages that hadn't replicated to the passive mailbox database copies in the DAG. When a mailbox database or server failure required the promotion of an out-of-date copy of the mailbox database, the messages in the transport dumpster were
    automatically resubmitted to the new active copy of the mailbox database.
  • The transport dumpster has been improved in Exchange 2013 and is now called Safety Net.

Similarity between Safety Net and transport dumpster in Exchange 2010:

  • Safety Net is a queue that's associated with the Transport service on a Mailbox server. This queue stores copies of messages that were successfully processed by the server.
  • You can specify how long Safety Net stores copies of the successfully processed messages before they expire and are automatically deleted. The default is 2 days.

Here's how Safety Net is different in Exchange 2013:

  • Safety Net doesn't require DAGs. For Mailbox servers that don't belong to a DAG, Safety Net stores copies of the delivered messages on other Mailbox servers in the local Active Directory site.
  • Safety Net itself is now redundant, and is no longer a single point of failure. This introduces the concept of the Primary Safety Net and the Shadow Safety Net. If the Primary Safety Net is unavailable for more than 12 hours, resubmit requests become shadow resubmit requests, and messages are re-delivered from the Shadow Safety Net.
  • Safety Net takes over some responsibility from shadow redundancy in DAG environments. Shadow redundancy doesn't need to keep another copy of the delivered message in a shadow queue while it waits for the delivered message to replicate to the passive copies of mailbox database on the other Mailbox servers in the DAG. The copy of the delivered message is already stored in Safety Net, so the message can be resubmitted from Safety Net if necessary.
    • In Exchange 2013, transport high availability is more than just a best effort for message redundancy. Exchange 2013 attempts to guarantee message redundancy. Because of this, you can't specify a maximum size limit for Safety Net. You can only specify how long Safety Net stores messages before they're automatically deleted.
  1. What is the purpose of crimson log channel in Exchange Server 2013
  • The HighAvailability channel contains events related to startup and shutdown of the Microsoft Exchange Replication service and other components that run within it, such as Active Manager or VSS writer for example. The HighAvailability channel is also used by Active Manager to log events related to Active Manager role monitoring and database action events, such as a database mount operation and log truncation, and to record events related to the DAG's underlying cluster.
  • The MailboxDatabaseFailureItems channel is used to log events associated with any failures that affect a replicated mailbox database.
  1. Difference between accepted domain and remote domain in Exchange Server 2013
  • Remote domains are SMTP domains that are external to your Microsoft Exchange organization. You can create remote domain entries to define the settings for messages transferred between your Exchange organization and specific external domains. The settings in the remote domain entry for a specific external domain override the settings in the default remote domain that normally apply to all external recipients. The remote domain settings are global for the Exchange organization
  • An accepted domain is any SMTP namespace for which a Microsoft Exchange Online organization sends or receives email. Accepted domains include those domains for which the Exchange organization is authoritative. An Exchange organization is authoritative when it handles mail delivery for recipients in the accepted domain. Accepted domains also include domains for which the Exchange organization receives mail and then relays it to an email server that's outside the organization for delivery to the recipient.
  1. What are the High Availability features introduce in Exchange Server 2010?
  • Mailbox resiliency – unified high availability and site resiliency
  • Database Availability Group – a group of up to 16 Mailbox servers that holds the set of replicated databases
  • Mailbox database copy – a mailbox database (.edb files and log file) that is either active or passive copy of the mailbox database
  • Database Mobility – the ability of a single mailbox database to be replicated to and mounted on other mailbox servers
  • RPC Client Access Service – a Client Access Server feature that provides a MAPI endpoint for outlook clients
  • Shadow redundancy – a transport feature that provides redundancy for messages for the entire time they are in transit
  • Incremental deployment – the ability to deploy high availability or site resilience after Exchange is installed
  • Exchange third-party replication API – an Exchange-provided API that enables use of third-party replication for a DAG
  1. What is Exchange Control Panel?
    ECP is a new, simplified web-based management console: a browser-based management client for end users, administrators and specialists. ECP is accessible via a URL, browsers and Outlook 2010; it is deployed as part of the Client Access Server role; it simplifies user administration for management tasks; and it is RBAC-aware.

  2. Who can use ECP and what are the manageable options?

  • Specialists and administrators – an administrator can delegate to specialists, e.g. help desk operators (change user name, password etc.), department administrators (change OU) and e-discovery administrators (legal department).
  • End users – comprehensive self-service tools for end users, e.g. fetching phone numbers, changing names and creating groups.
  • Hosted customers – tenant administrators and tenant end users.
  1. What is federated sharing?
    Federated Sharing allows easy sharing of availability information, calendar, and contacts with recipients in external federated organizations

  2. What are the options shared in federated sharing?
    - Free busy information
    - Calendar and contact sharing
    - Sharing policy

  3. What is Microsoft Federation Gateway?

Exchange Server 2010 uses Microsoft Federation Gateway (MFG), an identity service that runs in the cloud, as the trust broker. Exchange organizations wanting to use federation establish a federation trust with MFG, allowing it to become a federation partner to the Exchange organization. The trust allows users authenticated by Active Directory, known as the identity provider (IP), to be issued Security Assertion Markup Language (SAML) delegation tokens by MFG. The delegation tokens allow users from one federated organization to be trusted by another federated organization. With MFG acting as the trust broker, organizations are not required to establish multiple individual trust relationships with other organizations. Users can access external resources using a single sign-on (SSO) experience.

  1. What is Federation Trust?

A Federation Trust is established between an Exchange organization and MFG by exchanging the organization’s certificate with MFG, and retrieving MFG’s certificate and federation metadata. The certificate is used for encrypting tokens.

  1. What is Sharing Policy?

Sharing policies allow you to control how users in your organization can share calendar and contact information with users outside the organization. Recipients must be provisioned to use a particular sharing policy for it to apply.

  1. Why Archive?
  • Growing e-mail volume – everyone wants more e-mail, so storage and backup disk requirements keep increasing
  • Performance and storage issue – increase in Storage costs
  • Mailbox quota – users are forced to manage quota
  • PSTs – quota management often results in growing PSTs – outlook Auto Archive
  • Discovery and compliance issues – PSTs are difficult to discover centrally; regulatory retention schedules contribute to further volume/storage issues
  1. What are the archiving options introduced in Exchange Server 2010?
  • Personal Archive – a secondary mailbox that takes the place of the PST files of the primary mailbox
  • Retention Policies – folder/item level and archive/delete policies
  • Multi-Mailbox search – Role based GUI, admin can assign this permission to legal team
  • Legal Hold – monitor or control a user from delete a mail by legal hold and searchable with Multi Mailbox Search
  • Journaling – journal report de-duplication (avoids duplicate journaling of messages sent to distribution lists); one copy of the journal report per database
  • Journal decryption – HT role will do the decryption and send the decrypted copy for journaling
  1. What are the Retention Policies in Exchange Server 2010?
  • Move Policy – automatically moves messages to the archive mailbox with options of 6 months, 1 year, 2 years, 5 years and never (2 years is the default). Move policies help keep the mailbox under quota. This works like Outlook AutoArchive without creating PSTs
  • Delete Policy – automatically deletes messages. Delete policies are global. Removes unwanted items
  • Move + Delete policy – automatically moves messages to the archive after X months and deletes them from the archive after Y months. Policy priority can be set: explicit policies apply over default policies; longer policies apply over shorter policies
  1. What is journaling and what are the journaling features in Exchange Server 2010?
    Journaling is an option to track mail from a particular user or from a group of users. The new journaling features in Exchange Server 2010 are
  • Transport Journaling – the ability to journal individual mailboxes or SMTP addresses, with a detailed report per To/Cc/Bcc/Alt-Recipient and DL expansion
  • Journal report de-duplication – reduces duplication of journal reports; Exchange Server 2010 creates one report per message.
  1. What are the different Exchange Recipient types?
  • User mailbox: This mailbox is created for an individual user to store mails, calendar items, contacts, tasks, documents, and other business data.
  • Linked mailbox: This mailbox is created for an individual user in a separate, trusted forest. For example, the AD account is created in A.COM and the mailbox is created on the B.COM Exchange server.
  • Shared mailbox: This mailbox is not primarily associated with a single user and is generally configured to allow logon access for multiple users.
  • Legacy mailbox: This mailbox resides on a server running Exchange Server 2003 or Exchange 2000 Server.
  • Room mailbox: This mailbox is created for a meeting location, such as a meeting or conference room, auditorium, or training room. When we create this mailbox, by default a disabled user object account is created.
  • Equipment mailbox: A resource mailbox created for a non-location-specific resource, such as a portable computer, projector, microphone, or a company car. When we create this mailbox, a disabled user object account is created by default. Equipment mailboxes provide a simple and efficient way for users to use resources in a manageable way.
  1. What is a Smart Host? Where would you configure it?

A smart host is a type of mail relay server which allows an SMTP server to route e-mail to an intermediate mail server rather than directly to the recipient’s server.
Often this smart host requires authentication from the sender to verify that the sender has privileges to have mail forwarded through the smart host. This is an important distinction from an open relay that will forward mail from the sender without authentication. Common authentication techniques include SMTP-AUTH and POP.

Smart host is used for the following purposes:

  • Used for backup mail (secondary MX) services.
  • Used in spam control efforts.
  1. What are the new features introduced in Exchange Server 2010 from an overview perspective?
  • Protection and compliance
    • Email Archiving
    • Protect Communication
    • Advanced Security
  • Anywhere Access
    • Manage Inbox Overload
    • Enhanced Voice Mail
    • Collaborate efficiently
  • Flexible and reliable
    • Continuous Availability
    • Simplified Administration
    • Flexible deployment of Exchange Server 2010
  1. What’s New in Exchange Server 2010 in Client Access Server Level?

Client Access Server level improvements in Exchange Server 2010 are
Federation certificates, Exchange ActiveSync, SMS Sync, Integrated Rights Management, Microsoft Office Outlook Web App, and virtual directories.

  • Federation certificates can be self-signed certificates instead of certificates issued by a CA to establish the federation trust.

  • Exchange ActiveSync devices can be managed using the Exchange Control Panel, e.g. manage the default access level for all phones, set up an email alert when a device is quarantined, and create and manage ActiveSync device access rules

  • SMS sync is a new feature in Exchange ActiveSync that works with Windows Mobile 6.1 (with the Outlook Mobile update) and Windows Mobile 6.5. It gives the ability to synchronize messages between a mobile phone or device and the Exchange 2010 inbox

  • New Outlook Web App features like OWA themes and an option to customize the themes. Users also have an option to reset an expired password from OWA

  • The Reset OWA Virtual Directory wizard will repair damaged files on a virtual directory

  • Client throttling policies help you manage the performance of your Client Access servers. Previously, only the policies limiting the number of concurrent client connections were enabled by default; from Exchange 2010 SP1, all client throttling policies are enabled by default.

  1. What are the new Transport Server level features in Exchange Server 2010?

Below are the new transport features

  • MailTips access control over organizational relationships
  • Enhanced monitoring and troubleshooting features for MailTips and Message Tracking
  • Message throttling enhancements
  • Shadow redundancy promotion
  • SMTP failover and load balancing improvements
  • Support for extended protection on SMTP connections
  • Send connector changes to reduce NDRs over well-defined connections

Reference

]]>
<![CDATA[how to migrate from one ubuntu machine to another]]>

Step 1: Gather details on packages installed on the source machine

To get the list of all packages installed on the machine, the following command can be used. This one-line script will get all the installed packages and store them in a file.

sudo dpkg --get-selections | sed "s/
]]>
https://labbots.com/how-to-migrate-from-one-ubuntu-machine-to-another/5f368ff99d6dfc0e9c363ed4Sun, 24 Apr 2016 19:56:30 GMT

Step 1: Gather details on packages installed on the source machine


To get the list of all packages installed on the machine, the following command can be used. This one-line script will get all the installed packages and store them in a file.

sudo dpkg --get-selections | grep -v deinstall | sed "s/install$//" | sed "s/[[:space:]]*$//" | sort -u > ~/packagelist

To get the list of packages that are manually installed, the following command can be used.

comm -23 <(aptitude search '~i !~M' -F '%p' | sed  "s/ *$//" | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p'  |sort -u) |  sed -e '/^linux\-headers\-*/d' -e '/^linux\-image\-*/d' > ~/manualpackagelist

The above command works as follows:

  1. Get the list of packages that are not installed as dependencies, using aptitude.
  2. Get the list of packages installed right after a fresh install, which can be read from the initial-status.gz log.
  3. Compare the two lists to identify the packages that were manually installed, by extracting the lines unique to the aptitude result.
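As a quick sanity check of step 3, comm -23 prints only the lines unique to its first sorted input (the file names here are illustrative, not the ones used above):

```shell
# comm -23 suppresses lines unique to the second file (-2) and lines
# common to both files (-3), leaving only packages unique to the first list.
printf 'curl\ngit\nvim\n' > /tmp/all_packages
printf 'curl\nvim\n'      > /tmp/initial_packages
comm -23 /tmp/all_packages /tmp/initial_packages
# → git
```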

Alternative command for getting manually installed packages using apt-mark

comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u) > ~/manualpackagelist

Step 2: Copy configurations, home folder and package repositories.

  • Copy the manually added package repository source lists. The source lists are present under /etc/apt/sources.list.d in Ubuntu. They can either be copied manually or backed up and imported with y-ppa-manager. Detailed installation instructions can be found at [ppa manager installation].
  • Copy all the application specific configuration. Usually these configurations are in hidden directories under user home directory.
  • Copy the ssh keys which would be under /home/user/.ssh directory.
  • Make sure you copy any customized configuration under /etc, such as the apache or fail2ban configs.
  • Export the repository key.
sudo apt-key exportall > ~/Repository.keys

The key can be imported back into the system using

sudo apt-key add ~/Repository.keys
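The configuration items above (dotfiles, ssh keys, /etc customisations) can also be bundled into a single archive for transfer; a minimal sketch, where the helper name and the example path list are illustrative assumptions rather than part of the original steps:

```shell
#!/usr/bin/env bash
# Archive a list of config paths into one tarball for transfer to the
# target machine. --ignore-failed-read skips paths that do not exist.
backup_configs() {
    local dest="$1"; shift
    tar -czf "$dest" --ignore-failed-read "$@" 2>/dev/null
}

# e.g. backup_configs ~/machine-config.tar.gz ~/.ssh ~/.bashrc /etc/apache2
```

The resulting archive can then be moved with scp or rsync and unpacked on the target with tar -xzf.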

Step 3: Installing packages on target machine.

The package list file must be copied across to the target machine using rsync or scp, and then aptitude can be used to install the packages.

sudo aptitude update && cat ~/manualpackagelist | xargs sudo aptitude install -y

To install the packages from the package list file, verifying whether each package is already installed before installing it, the following script can be used.
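One possible sketch of such a check-then-install loop, assuming the list produced in step 1 (~/manualpackagelist) and using dpkg -s for the already-installed check; the actual install line is left commented out, and the function names are illustrative:

```shell
#!/usr/bin/env bash
# Read a package list and install only the packages that are missing.
is_installed() {
    # dpkg -s exits non-zero for packages that are not installed
    dpkg -s "$1" >/dev/null 2>&1
}

install_missing() {
    local list="$1" pkg
    while read -r pkg; do
        [ -z "$pkg" ] && continue
        if is_installed "$pkg"; then
            echo "skip: $pkg (already installed)"
        else
            echo "install: $pkg"
            # sudo aptitude install -y "$pkg"
        fi
    done < "$list"
}

# e.g. install_missing ~/manualpackagelist
```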

Reference

]]>
<![CDATA[Split and Merge file using dd command]]>https://labbots.com/split-and-merge-file-using-dd-command/5f368ff99d6dfc0e9c363ed3Wed, 13 Apr 2016 21:24:11 GMTSplit and Merge file using dd command

The dd command is one of the powerful utilities in Linux and should be in every hacker's arsenal. dd is used to manipulate binary files directly: it is the tool used to write disk headers and boot records, and it is most often used to copy / clone / backup / restore entire hard disks or partitions. Wrong usage of the command can destroy files or an entire hard disk, as it deals directly with binary data. The command is also widely used to quickly trash files or partitions.

I was curious to find out how the dd command can be used to split and merge files. I wanted to split a given file into fixed-size chunks and then, on the other end, put the files back together to recreate the original file. So here is the bash script I came up with to achieve this.

Implementation
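The core of the split/merge logic can be sketched as below: dd's skip= and seek= operands move the read and write offsets in units of bs, which gives fixed-size chunks on split and the correct offsets on merge. The chunk size, the ".partN" naming and the function names are illustrative assumptions, not the script's actual interface.

```shell
#!/usr/bin/env bash
# Sketch: split a file into fixed-size chunks with dd and merge them back.
CHUNK_SIZE=$((1024 * 1024))   # 1 MiB per chunk

split_file() {
    local src="$1" prefix="$2"
    local size part=0
    size=$(stat -c%s "$src")
    while [ $((part * CHUNK_SIZE)) -lt "$size" ]; do
        # skip= moves the read offset in units of bs
        dd if="$src" of="${prefix}.part${part}" bs="$CHUNK_SIZE" \
           skip="$part" count=1 status=none
        part=$((part + 1))
    done
}

merge_file() {
    local prefix="$1" dest="$2"
    local part=0
    : > "$dest"   # start from an empty file
    while [ -f "${prefix}.part${part}" ]; do
        # seek= moves the write offset; conv=notrunc keeps earlier chunks
        dd if="${prefix}.part${part}" of="$dest" bs="$CHUNK_SIZE" \
           seek="$part" count=1 conv=notrunc status=none
        part=$((part + 1))
    done
}
```

Iterating part numbers in order (rather than globbing "${prefix}.part*") avoids the lexical-sort problem where part10 would come before part2.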

Usage

The above script can be used to split and merge any type of files.

Splitting files

The file can be split into chunks by passing the -b or --split flag to the script. Following is an example usage for splitting files:

 ./split_merge_file.sh --split --source [path_to_file] --destination [destination_path] --prefix split

The prefix argument is used to specify a prefix that is added to the split files. This prefix is later used in the merge command to fetch all the parts of the file.

Merging files

Merge the file parts back into a single file using the --merge flag in the script.

./split_merge_file.sh --merge --source [path_to_split_files] --destination [destination_directory] --prefix split

Reference

]]>
<![CDATA[Open VPN config file split and merge]]>

I am a big fan of OpenVPN and I use OpenVPN for both work and personal purposes. Using VPN is an integral part for me as I rely on it to manage my VPC at work and also to manage my privacy online. I use OpenVPN network manager to connect

]]>
https://labbots.com/open-vpn-config-file-split-and-merge-bash/5f368ff99d6dfc0e9c363ed2Tue, 12 Apr 2016 20:12:43 GMTOpen VPN config file split and merge

I am a big fan of OpenVPN and I use it for both work and personal purposes. Using a VPN is integral for me, as I rely on it to manage my VPC at work and to protect my privacy online. I use the OpenVPN network manager to connect to the VPN on my Ubuntu machine. Unfortunately, the OpenVPN network manager does not seem to be happy if the certificates and key files are inline in the configuration file, so all those certificates need to be separated into individual files and linked from the config file. I got fed up with creating these configuration files for all the users, so, as any decent hacker would do, I decided to write a bash script to split or merge back an OpenVPN config file.

Dependencies

The script has very few dependencies, and all the commands used are available in most Linux distributions.

  • sed (Stream editor)
  • grep
  • getopt

Usage

The following script can be used to merge or split an OpenVPN configuration file.

Splitting Config

To tell the script that a split operation is to be performed, the -p flag is set. --source is a required argument through which the path to the OpenVPN configuration file is specified.


 $ ./ovpn_config_merge_split.sh -p --source [path_to_config] --destination [destination_path]
 
Merging Config

To merge the config file back into one file with all the certificates and keys inline, pass the -m flag to the script, which sets it to merge mode. The script can automatically try to detect the certificates and keys if their paths are specified in the OpenVPN config file; otherwise the certificates and keys can be passed as arguments to the script.

./ovpn_config_merge_split.sh -m=auto --source [path_to_config] --destination [destination_path]

or to manually specify all the certs and keys

./ovpn_config_merge_split.sh -m --source [path_to_config] --destination [destination_path] --ca [filepath] --cert [filepath] --key [filepath] --tls-auth [filepath]

Implementation
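The heart of the split logic is pulling each inline <tag>…</tag> block out of the .ovpn file with sed; a minimal sketch, where the output file names and function names are illustrative assumptions rather than the script's actual naming:

```shell
#!/usr/bin/env bash
# Extract the inline certificate/key blocks of an OpenVPN config into
# separate files, as the OpenVPN network manager expects.
extract_block() {
    local tag="$1" src="$2" out="$3"
    # print everything between <tag> and </tag>, then drop the tag lines
    sed -n "/<${tag}>/,/<\/${tag}>/p" "$src" | sed '1d;$d' > "$out"
}

split_config() {
    local src="$1" dest_dir="$2"
    mkdir -p "$dest_dir"
    extract_block ca       "$src" "$dest_dir/ca.crt"
    extract_block cert     "$src" "$dest_dir/client.crt"
    extract_block key      "$src" "$dest_dir/client.key"
    extract_block tls-auth "$src" "$dest_dir/ta.key"
}
```

The merge direction is the inverse: wrap each file's contents in its <tag>…</tag> pair and append it to the config.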

]]>
<![CDATA[Delete stale files by last accessed time and created time (Bash)]]>

I implemented file based caching on one of my projects to cache large number of responses which are CPU and memory intensive. The file caching worked as a charm at start but soon I started to hit the storage capacity of the web server. So I wanted a script that

]]>
https://labbots.com/delete-stale-files-by-access-time-bash/5f368ff99d6dfc0e9c363ed1Sun, 10 Apr 2016 20:32:58 GMTDelete stale files by last accessed time and created time (Bash)

I implemented file-based caching on one of my projects to cache a large number of responses that are CPU and memory intensive. The file caching worked like a charm at the start, but soon I started to hit the storage capacity of the web server. So I wanted a script that can be configured as a cron job, which then checks the configured directory at regular intervals to see whether the directory has reached a defined threshold, and if so deletes stale files that are no longer in use.

My algorithm flow:

  • Check whether the directory reached threshold.
  • If reached, then start deleting files that have not been accessed in the last 30 days.
  • If the threshold is still not reached, decrement the access-time cutoff by 10 days and delete those files. (This loop continues until the cutoff reaches a configured minimum limit; I don't want to delete cached files that have been accessed recently.)
  • If the threshold is still not reached, start deleting the oldest created files, starting from 30 days and decrementing by 10 days until a pre-configured minimum limit is reached.
  • If the threshold is still not reached, email a notification to a configured email address.
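The first two steps of this flow boil down to a du check followed by an atime-based find; a minimal sketch, assuming GNU find's -delete action (the function and variable names are illustrative, not the script's actual interface):

```shell
#!/usr/bin/env bash
# Delete files not accessed for $days days, but only once the directory
# has grown past the threshold (in MB) given as the second argument.
clean_stale() {
    local dir="$1" threshold_mb="$2" days="${3:-30}"
    local used_mb
    used_mb=$(du -sm "$dir" | cut -f1)
    if [ "$used_mb" -ge "$threshold_mb" ]; then
        # -atime +N matches files last accessed more than N days ago
        find "$dir" -type f -atime +"$days" -delete
    fi
}
```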

Implementation

The following script is designed to be run either manually or as a cron job to monitor a specific directory. The script takes 2 arguments.

  • Directory path.
  • Threshold limit (in MB)

Usage:

  $./file_cache_clean.sh [directory_path] [threshold_limit_in_mb]

Optional Step: Set up your crontab

To make this run every 12 hours, I added this to my crontab (using crontab -e):

0 0,12 * * * /home/user/scripts/file_cache_clean.sh /var/files/cache 1000
]]>
<![CDATA[Bash Cheat Sheet - File operation]]>https://labbots.com/bash-file-operation-cheat-sheet/5f368ff99d6dfc0e9c363ecfThu, 07 Apr 2016 22:29:57 GMT

Get filename with file extension

FULLFILENAME=$(basename "$FILE")

Get filename without file extension

FILENAME="${FILE%.*}"

Get extension from filename

EXTENSION="${FILE##*.}"

Get MIME type of the given file

MIME_TYPE="$(file --brief --mime-type "$FILE")"

Get current directory of the script


Get the full directory name of the script no matter where it is being called from. Uses the Bash variable BASH_SOURCE.

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

Get the name of the script

Simple way to get the script name

PROGRAMNAME=${0##*/}

Advanced method to get the script name. The below code will also try to resolve symbolic link.

PROGRAMNAME="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")"
PROGRAMNAME=${PROGRAMNAME%.*}

Get the directory path for a given filepath

The below code will resolve relative path or symlink into absolute directory path for a given filepath.

DIR="$(dirname "$(readlink -f "$FILE")")"

Preserve file timestamp from original file

filemodtime=$(stat -c%y "$FILE" | sed 's/[ ]\+/ /g')
touch -m -d "$filemodtime" "$NEWFILE"

Find and update access time

Find all files in folder and update the access time to specified time.

find /var/files/ -type f -atime +1 -exec touch -a --date="2016-03-10" {} \;

Changing a File "Access" and "Modification" Time

Change a file's access time (atime) :
touch -a --date="2016-03-10" file.txt
touch -a --date="2016-03-10 02:00" file.txt
touch -a --date="2016-03-10 02:00:22.432346231 +0421" file.txt
Change a file's modification time (mtime) :
touch -m --date="2025-03-12" file.txt
touch -m --date="2025-03-12 21:04" file.txt
touch -m --date="2025-03-12 21:04:22.432346231 +0421" file.txt

Update All time of a file to specific time

The below command copies the current time into a variable, then updates the system date to the specified time, touches the file to update its timestamps, and finally reverts the system time.

NOW=$(date) && date -s "2030-08-15 21:30:11" && touch file.txt && date -s "$NOW" && unset NOW

Remove Whitespace from File names

This command will find all the files under the given directory (passed as the first argument) and remove spaces from their file names.

find $1 -name "* *" -type f -print0 |   while read -d $'\0' f; do mv -fv "$f" "${f// /}"; done

Reference

]]>
<![CDATA[Google Drive upload bash script]]>https://labbots.com/google-drive-upload-bash-script/5f368ff99d6dfc0e9c363eceSun, 03 Apr 2016 18:55:41 GMTGoogle Drive upload bash script

I was looking for a simple command-line script to upload files to Google Drive and I stumbled upon gdrive, a command-line utility to manage files in Google Drive. But it was heavyweight for my requirements and had dependencies which I was not able to install on the servers. So I decided to write one myself that catered to my needs.

Google offers quite a number of REST APIs to integrate with Google Drive and it's really simple to use them. In the following post I use the Google v2 APIs to upload files / folders to Google Drive. The complete script is available to download on GitHub (https://github.com/labbots/google-drive-upload).

Dependencies

My intention was to write a script with minimal dependencies, and most of the dependencies are available by default on most Linux platforms. The script requires the following packages:

  • curl
  • sed (Stream editor)
  • find command
  • awk
  • getopt

Create Google API key

Accessing the Google APIs requires authentication credentials, which can be created using the Google Developers Console. Make sure the Google Drive API is enabled for the project created in the console. This API key (client ID and client secret) will be used in the script to generate an OAuth 2.0 token to access the Google Drive APIs.

Bash script

To seamlessly access and manage the user's Google Drive, the script requires device authorization. The v3 Google APIs have restricted the scopes available to device authorization, so the v3 APIs cannot be used to upload / manage files in Google Drive through the device code authorization workflow. So I decided to use the v2 APIs to achieve my requirements (I know older versions will be deprecated and are not a good idea to develop against, but this is a weekend project and it works :P )

The idea is to have a script that takes a filename and folder name as arguments and uploads the file to the specified folder in Google Drive. I wanted all other configuration, such as API keys and refresh tokens, to be stored in a config file which can be set up during the initial execution of the script.

Step 1 : Parsing arguments and options passed to the script.

To achieve this I used the getopt utility available in all Linux distributions, which allows parsing both short and long options. The options can be parsed as shown below.


PROGNAME=${0##*/}
SHORTOPTS="vhr:C:z:" 
LONGOPTS="verbose,help,create-dir:,root-dir:,config:" 

set -o errexit -o noclobber -o pipefail -o nounset 
OPTS=$(getopt -s bash --options $SHORTOPTS --longoptions $LONGOPTS --name $PROGNAME -- "$@" ) 

eval set -- "$OPTS"

VERBOSE=false
HELP=false
CONFIG=""
ROOTDIR=""

while true; do
  case "$1" in
    -v | --verbose ) VERBOSE=true;curl_args="--progress"; shift ;;
    -h | --help )    usage; shift ;;
    -C | --create-dir ) FOLDERNAME="$2"; shift 2 ;;
    -r | --root-dir ) ROOTDIR="$2";ROOT_FOLDER="$2"; shift 2 ;;
    -z | --config ) CONFIG="$2"; shift 2 ;;
    -- ) shift; break ;;
    * )  break ;;
  esac
done

The default config parameters are stored in a config file in the home directory of the user for future use.


if [ -e $HOME/.googledrive.conf ]
then
    . $HOME/.googledrive.conf
fi

old_umask=`umask`
umask 0077

if [ -z "$ROOT_FOLDER" ]
then
    read -p "Root Folder ID (Default: root): " ROOT_FOLDER
    if [ -z "$ROOT_FOLDER" ] || [ `echo $ROOT_FOLDER | tr [:upper:] [:lower:]` = `echo "root" | tr [:upper:] [:lower:]` ]
    	then
    		ROOT_FOLDER="root"
    		echo "ROOT_FOLDER=$ROOT_FOLDER" >> $HOME/.googledrive.conf
    	else
		    if expr "$ROOT_FOLDER" : '^[A-Za-z0-9_]\{28\}$' > /dev/null
		    then
				echo "ROOT_FOLDER=$ROOT_FOLDER" >> $HOME/.googledrive.conf
			else
				echo "Invalid root folder id"
				exit -1
			fi
		fi
fi

if [ -z "$CLIENT_ID" ]
then
    read -p "Client ID: " CLIENT_ID
    echo "CLIENT_ID=$CLIENT_ID" >> $HOME/.googledrive.conf
fi

if [ -z "$CLIENT_SECRET" ]
then
    read -p "Client Secret: " CLIENT_SECRET
    echo "CLIENT_SECRET=$CLIENT_SECRET" >> $HOME/.googledrive.conf
fi

Step 2 : Generate access token.

We require an access token to access the APIs. The script uses the device code authorization OAuth workflow to generate an access token and a refresh token for the user. When the user runs the script for the first time, we want them to authorize the application so we can get the access token from Google.
The Google REST APIs can be called from a shell script using the curl command, and the API responds with JSON. So we need a simple parser to retrieve values from the JSON object in the response. The following function lets us extract the required value from the JSON response.


# Method to extract data from json response
function jsonValue() {
    KEY="$1"
    num="$2"
    awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/\042'$KEY'\042/){print $(i+1)}}}' | tr -d '"' | sed -n "${num}p"
}

To get the application authorized by the user using device code authorization, we need to make a API call to Google Oauth endpoint as follows


  RESPONSE=`curl --silent "https://accounts.google.com/o/oauth2/device/code" --data "client_id=$CLIENT_ID&scope=$SCOPE"`
	DEVICE_CODE=`echo "$RESPONSE" | jsonValue "device_code"`
	USER_CODE=`echo "$RESPONSE" | jsonValue "user_code"`
	URL=`echo "$RESPONSE" | jsonValue "verification_url"`

	echo -n "Go to $URL and enter $USER_CODE to grant access to this application. Hit enter when done..."
	read

	RESPONSE=`curl --silent "https://accounts.google.com/o/oauth2/token" --data "client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET&code=$DEVICE_CODE&grant_type=http://oauth.net/grant_type/device/1.0"`

	ACCESS_TOKEN=`echo "$RESPONSE" | jsonValue access_token`
	REFRESH_TOKEN=`echo "$RESPONSE" | jsonValue refresh_token`

    echo "REFRESH_TOKEN=$REFRESH_TOKEN" >> $HOME/.googledrive.conf

The resulting access token can be used to access the google drive API and refresh token can be stored in config which can be used to regenerate access token when current access token expires.
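The refresh step can be sketched as a second call to the same token endpoint with grant_type=refresh_token. This is a hedged sketch: json_field is a simplified stand-in parser (not the post's helper), and CLIENT_ID, CLIENT_SECRET and REFRESH_TOKEN are assumed to be loaded from ~/.googledrive.conf.

```shell
#!/usr/bin/env bash
# Sketch: regenerate the access token from the stored refresh token.

# pull a string field out of a one-line JSON response
json_field() {
    sed -n 's/.*"'"$1"'" *: *"\([^"]*\)".*/\1/p'
}

refresh_access_token() {
    local response
    response=$(curl --silent "https://accounts.google.com/o/oauth2/token" \
        --data "client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&refresh_token=${REFRESH_TOKEN}&grant_type=refresh_token")
    ACCESS_TOKEN=$(echo "$response" | json_field access_token)
}
```

Unlike the device-code exchange, this call returns only a new access token; the refresh token itself stays valid and stays in the config file.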

Step 3 : Create Directory and Upload file.

Once the access token is generated, the last step is to upload the file to the specified directory or to the root directory of the Google Drive. Google Drive operates based on IDs and not on names, so the drive can have two folders with the same name. But for my use case, I wanted to upload the file to the same directory and not create a directory if it already exists. So first I check whether the directory exists; if it does, I use its folder ID, otherwise I create a new folder in Drive and use that folder ID.


function createDirectory(){
	DIRNAME="$1"
	ROOTDIR="$2"
	ACCESS_TOKEN="$3"
	FOLDER_ID=""
    QUERY="mimeType='application/vnd.google-apps.folder' and title='$DIRNAME'"
    QUERY=$(echo $QUERY | sed -f ${DIR}/url_escape.sed)

	SEARCH_RESPONSE=`/usr/bin/curl \
					--silent \
					-XGET \
					-H "Authorization: Bearer ${ACCESS_TOKEN}" \
					 "https://www.googleapis.com/drive/v2/files/${ROOTDIR}/children?orderBy=title&q=${QUERY}&fields=items%2Fid"`

	FOLDER_ID=`echo $SEARCH_RESPONSE | jsonValue id`


	if [ -z "$FOLDER_ID" ]
	then
		CREATE_FOLDER_POST_DATA="{\"mimeType\": \"application/vnd.google-apps.folder\",\"title\": \"$DIRNAME\",\"parents\": [{\"id\": \"$ROOTDIR\"}]}"
		CREATE_FOLDER_RESPONSE=`/usr/bin/curl \
								--silent  \
								-X POST \
								-H "Authorization: Bearer ${ACCESS_TOKEN}" \
								-H "Content-Type: application/json; charset=UTF-8" \
								-d "$CREATE_FOLDER_POST_DATA" \
								"https://www.googleapis.com/drive/v2/files?fields=id"`
		FOLDER_ID=`echo $CREATE_FOLDER_RESPONSE | jsonValue id`

	fi
	echo "$FOLDER_ID"
}

So the final step is to upload the file into the specified directory in the drive. I decided on using a resumable upload link so I could resume the upload in case of failure rather than restart from the beginning, which might be an inconvenience for larger files.


function uploadFile(){

	FILE="$1"
	FOLDER_ID="$2"
	ACCESS_TOKEN="$3"
	MIME_TYPE=`file --brief --mime-type "$FILE"`
	SLUG=`basename "$FILE"`
	FILESIZE=$(stat -c%s "$FILE")

	# JSON post data to specify the file name and folder under while the file to be created
	postData="{\"mimeType\": \"$MIME_TYPE\",\"title\": \"$SLUG\",\"parents\": [{\"id\": \"$FOLDER_ID\"}]}"
	postDataSize=$(echo $postData | wc -c)

	# Curl command to initiate resumable upload session and grab the location URL
	log "Generating upload link for file $FILE ..."
	uploadlink=`/usr/bin/curl \
				--silent \
				-X POST \
				-H "Host: www.googleapis.com" \
				-H "Authorization: Bearer ${ACCESS_TOKEN}" \
				-H "Content-Type: application/json; charset=UTF-8" \
				-H "X-Upload-Content-Type: $MIME_TYPE" \
				-H "X-Upload-Content-Length: $FILESIZE" \
				-d "$postData" \
				"https://www.googleapis.com/upload/drive/v2/files?uploadType=resumable" \
				--dump-header - | sed -ne s/"Location: "//p | tr -d '\r\n'`

	# Curl command to push the file to google drive.
	# If the file size is large then the content can be split to chunks and uploaded.
	# In that case content range needs to be specified.
	log "Uploading file $FILE to google drive..."
	curl \
	-X PUT \
	-H "Authorization: Bearer ${ACCESS_TOKEN}" \
	-H "Content-Type: $MIME_TYPE" \
	-H "Content-Length: $FILESIZE" \
	-H "Slug: $SLUG" \
	--data-binary "@$FILE" \
	--output /dev/null \
	"$uploadlink" \
	$curl_args
}

Resumable upload of files is yet to be implemented; hopefully I will cover it in a following post.

The complete script is available for download in google drive upload script github repository.

Reference

]]>
<![CDATA[Active Directory Interview Questions]]>https://labbots.com/active-directory-interview-questions/5f368ff99d6dfc0e9c363eccTue, 29 Mar 2016 18:48:58 GMT
Frequently asked interview questions on Active Directory.

This is a compilation of questions and answers
on Active Directory from various sources listed below. This provides a starting point in preparation for a Windows administration interview.

  1. Define Active Directory

Active Directory is a database that stores data pertaining to the users and objects within the network. It allows the networks that connect with AD to be organized, managed and administered.

  1. What is a domain within Active Directory?

A domain represents a group of network resources, including computers, printers, applications and other resources. Domains share a directory database, and a domain is represented by the addresses of the resources within that database. A user can log into a domain to gain access to the resources that are listed as part of that domain.

  1. What is the domain controller?

The server that responds to user requests for access to the domain is called the Domain Controller or DC. The Domain Controller allows a user to gain access to the resources within the domain through the use of a single username and password.

  1. Explain what domain trees and forests are

Domains that share common schemas and configurations can be linked to form a contiguous namespace. Domains within the trees are linked together by creating special relationships between the domains based on trust.
Forests consist of a number of domain trees that are linked together within AD, based on various implicit trust relationships. Forests are generally created where a server setup includes a number of root DNS addresses. Trees within the forest do not share a contiguous namespace.

  1. What is LDAP?

LDAP is an acronym for Lightweight Directory Access Protocol and it refers to the protocol used to access, query and modify the data stored within the AD directories. LDAP is an internet standard protocol that runs over TCP/IP.
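
As an illustration of querying AD over LDAP: the query below is only assembled as a string, not executed, and the domain controller hostname and base DN are hypothetical examples.

```shell
# Sketch: compose an ldapsearch query against a domain controller.
# Port 389 is the standard LDAP port; host and base DN are made-up examples.
build_ldap_query() {
  local host="$1" base="$2" filter="$3"
  printf 'ldapsearch -H ldap://%s:389 -b "%s" "%s"' "$host" "$base" "$filter"
}

cmd="$(build_ldap_query dc01.example.com 'dc=example,dc=com' '(objectClass=user)')"
echo "$cmd"
```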

  1. Mention which is the default protocol used in directory services?

The default protocol used in directory services is LDAP ( Lightweight Directory Access Protocol).

  1. What tool would you use to edit AD?

Adsiedit.msc is a low-level editing tool for Active Directory. It is a Microsoft Management Console snap-in with a graphical user interface that allows administrators to accomplish simple tasks like adding, editing and deleting objects within a directory service. Adsiedit.msc uses Application Programming Interfaces to access Active Directory. Since it is an MMC snap-in, it requires access to MMC and a connection to an Active Directory environment to function correctly.

  1. How would you manage trust relationships from the command prompt?

Netdom.exe is another program within Active Directory that allows administrators to manage the Active Directory. Netdom.exe is a command line application that allows administrators to manage trust relationship within Active Directory from the command prompt. Netdom.exe allows for batch management of trusts. It allows administrators to join computers to domains. The application also allows administrators to verify trusts and secure Active Directory channels.
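
For illustration, the netdom trust syntax can be sketched as follows; the domain names are hypothetical, and the commands are only assembled into strings here since netdom exists only on Windows.

```shell
# Sketch: netdom commands for managing a trust from the command prompt.
# Domain names are made up; netdom itself runs only on Windows.
trusting_domain="child.example.com"
trusted_domain="example.com"

verify_cmd="netdom trust $trusting_domain /d:$trusted_domain /verify"
reset_cmd="netdom trust $trusting_domain /d:$trusted_domain /reset"

echo "$verify_cmd"
echo "$reset_cmd"
```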

  1. Where is the AD database held and how would you create a backup of the database?

The database is stored within the Windows NTDS directory. You can create a backup of the database by backing up the System State data using the default NTBACKUP tool provided by Windows, or a tool such as Symantec’s NetBackup. The System State backup will include the local registry, the boot files, the COM+ class registration database, the NTDS.DIT file and the SYSVOL folder.

  1. What is SYSVOL, and why is it important?

SYSVOL is a folder that exists on all domain controllers. It is the repository for the domain’s shared files and stores the important elements of Active Directory group policy. The File Replication Service (FRS) replicates the SYSVOL folder among domain controllers, and logon scripts and policies are delivered to each domain user via SYSVOL.
SYSVOL also stores security-related information for AD.

  1. Briefly explain how Active Directory authentication works

When a user logs into the network, the user provides a username and password. The computer sends this username and password to the KDC which contains the master list of unique long term keys for each user. The KDC creates a session key and a ticket granting ticket. This data is sent to the user’s computer. The user’s computer runs the data through a one-way hashing function that converts the data into the user’s master key, which in turn enables the computer to communicate with the KDC, to access the resources of the domain.

  1. Mention what is the difference between domain admin groups and enterprise admins group in AD?

Enterprise Admin Group

  • Members of this group have complete control of all domains in the forest.
  • By default, this group belongs to the administrators group on all domain controllers in the forest.
  • As such this group has full control of the forest, add users with caution.

Domain Admin Group

  • Members of this group have complete control of the domain
  • By default, this group is a member of the administrators group on all domain controllers, workstations and member servers at the time they are linked to the domain.
  • As such the group has full control in the domain, add users with caution.
  1. Mention what is Kerberos?

Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.

  1. Mention what are lingering objects?

Lingering objects can exist if a domain controller does not replicate for an interval of time that is longer than the tombstone lifetime (TSL).

  1. Mention what is TOMBSTONE lifetime?

The tombstone lifetime in an Active Directory forest determines how long a deleted object is retained. Deleted objects in Active Directory are stored in a special object referred to as a tombstone. By default, Windows uses a 60-day tombstone lifetime if one is not set in the forest configuration.
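
As a worked example of the retention window, GNU date can compute when an object deleted on a given day falls out of the 60-day default lifetime (the deletion date below is made up):

```shell
# Sketch: compute the end of a 60-day tombstone lifetime with GNU date.
DELETED_ON="2021-01-01"
TSL_DAYS=60
PURGE_AFTER="$(date -u -d "$DELETED_ON + $TSL_DAYS days" +%F)"
echo "Deleted on $DELETED_ON; tombstone eligible for garbage collection after $PURGE_AFTER"
```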

  1. Mention what is PDC emulator and how would one know whether PDC emulator is working or not?

PDC Emulator: There is one PDC emulator per domain. When there is a failed authentication attempt, it is forwarded to the PDC emulator, which acts as a “tie-breaker”; the PDC emulator also controls time sync across the domain.
The following symptoms indicate that the PDC emulator is not working:

  • Time is not syncing
  • User’s accounts are not locked out
  • Windows NT BDCs are not getting updates
  • Pre-Windows 2000 computers are unable to change their passwords.
  1. Explain what is Active Directory Schema?

The schema is the Active Directory component that describes all the attributes and objects that the directory service uses to store data.

  1. Explain what is a child DC?

A child DC is a domain controller for a subdomain beneath the root domain; the child domain shares a contiguous namespace with its parent.

  1. Explain what is RID Master?

The RID master (Relative Identifier master) is the domain controller responsible for allocating the unique IDs assigned to objects created in AD.
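
Since the RID is the final component of an object’s SID, extracting it can be sketched in shell (the SID below is a made-up example):

```shell
# Sketch: the RID is the last hyphen-separated component of a SID.
sid="S-1-5-21-3623811015-3361044348-30300820-1013"
rid="${sid##*-}"          # everything after the last hyphen
issuer="${sid%-*}"        # the issuing-authority portion of the SID
echo "issuer=$issuer rid=$rid"
```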

  1. Mention what are the components of AD?

Components of AD include:

  • Logical structure: Trees, Forests, Domains and OUs
  • Physical structure: Domain controllers and Sites
  1. Explain what is Infrastructure Master?

The Infrastructure Master is responsible for updating cross-domain user and group references and keeping them consistent with the global catalog.

  1. What is FSMO?

Flexible Single Master Operation (FSMO) roles are a specialized set of domain controller (DC) tasks, used where standard data transfer and update methods are inadequate. AD normally relies on multiple peer DCs, each with a copy of the AD database, synchronized by multi-master replication.

  1. Tell me about the FSMO roles?
  • Schema Master
  • Domain Naming Master
  • Infrastructure Master
  • RID Master
  • PDC

Schema Master and Domain Naming Master are forest-wide roles, with only one of each per forest; the other roles are domain-wide, with one of each per domain.
AD replication is multi-master: a change can be made on any domain controller and is replicated to the other domain controllers. The exceptions are the five roles above: changes to them can only be made on the dedicated domain controller holding the role, which is why they are called Flexible Single Master Operations (FSMO).

  1. Which FSMO role is the most important? And why?

An interesting question: which of the five FSMO roles is the most important, in the sense that its failure impacts end users immediately?
Many administrators pick the Schema Master role, perhaps because the schema sounds critical to running Active Directory.
The correct answer is the PDC emulator. To see why, consider what happens when each FSMO role holder fails.
Schema Master – The Schema Master is needed to update the schema, and we do not update the schema daily. Schema updates happen during operating system migrations, when installing a new Exchange version, or for any other application that requires extending the schema.
So if the Schema Master server is unavailable, we cannot update the schema, but this does not affect ongoing Active Directory operation or the end user.
The Schema Master only needs to be online when a schema change is planned, so there is time to bring the server back.
Domain Naming Master – The Domain Naming Master is required to create a new domain or an application partition. As with the schema, we do not create domains or application partitions frequently.
So if the Domain Naming Master server is unavailable, we cannot create a new domain or application partition; users are not affected and will not even be aware that the server is down.
Infrastructure Master – The Infrastructure Master handles cross-domain updates. What is actually updated between domains? When a user logs into a domain, the TGT is created with the list of access rights the user has through group membership, including memberships from trusted domains. The Infrastructure Master keeps this information up to date, refreshing its reference data every two days by comparing it with the Global Catalog (which is why the Infrastructure Master and a GC should not be kept on the same server).
In a single-domain, single-forest environment there is no impact if the Infrastructure Master server is down.
In a multi-domain or multi-forest environment there is an impact, but there is enough time to fix the issue before it affects end users.
RID Master – Every DC is initially issued 500 RIDs by the RID Master server. RIDs are used when creating new objects in Active Directory: every new object is created with a Security ID (SID), and the RID is the last part of the SID. The RID uniquely identifies a security principal relative to the local or domain security authority that issued the SID.
When a DC’s pool gets down to 250 (50%), it requests a second pool of RIDs from the RID Master. If the RID Master server is unavailable, new RID pools cannot be issued, and each DC can only create new objects while it has RIDs available. Since every DC has anywhere between 250 and 750 RIDs available at a given time, there is no immediate impact.
PDC – The PDC is required for time sync, user logons, password changes and trusts. That is why the PDC is the most important FSMO role holder to get back online: its failure impacts end users immediately, and it needs to be recovered as soon as possible.
The PDC emulator acts as the Primary Domain Controller for backwards compatibility, is responsible for time synchronization within the domain, and acts as the password master. Any password change is replicated to the PDC emulator as soon as possible, and if a logon request fails due to a bad password, the request is passed to the PDC emulator to check the password before the logon is rejected.

  1. What is Active Directory Partitions?

An Active Directory partition defines how and where AD information is logically stored.

  1. What are all the Active Directory Partitions?
  • Schema
  • Configuration
  • Domain
  • Application partition
  1. What is KCC?

KCC (Knowledge Consistency Checker) is used to generate the replication topology for intersite and intrasite replication. Within a site, replication traffic is carried by remote procedure calls over IP, while between sites it is done through either RPC or SMTP.

  1. Explain what intrasite and intersite replication is and how KCC facilitates replication

Replication between DCs inside a single site is called intrasite replication, while replication between DCs in different sites is called intersite replication. Intrasite replication occurs frequently, while intersite replication runs on a schedule, mainly to conserve network bandwidth.

KCC is an acronym for the Knowledge Consistency Checker. The KCC is a process that runs on all of the domain controllers and generates the replication topology within sites and between sites. Between sites, replication is done through SMTP or RPC, while intrasite replication uses remote procedure calls over IP.

  1. What is group policy?

Group Policy is one of the most exciting -- and potentially complex -- mechanisms that the Active Directory enables. Group policy allows a bundle of system and user settings (called a "Group Policy Object" or GPO) to be created by an administrator of a domain or OU and have it automatically pushed down to designated systems.

Group Policy can control everything from user interface settings such as screen background images to deep control settings in the client such as its TCP/IP configuration and authentication settings. There are currently over 500 controllable settings. Microsoft has provided some templates as well to provide a starting point for creating policy objects.

A significant advantage of group policy over the old NT-style policies is that the changes they make are reversed when the policy no longer applies to a system. In NT 4, once a policy was applied to a system, removing that policy did not by itself roll back the settings that it imposed on the client. With Windows 2000, when a specified policy no longer applies to a system it will revert to its previous state without administrative interference.

Multiple policies from different sources can be applied to the same object. For example, a domain might have one or more domain-wide policies that apply to all systems in the domain. Below that, systems in an OU can also have policy objects applied to it, and the OU can even be further divided into sub-OU's with their own policies.

This can create a very complex web of settings so administrators must be very careful when creating these multiple layers of policy to make sure the end result -- which is the union of all of the applicable policies with the "closest" policy taking priority in most cases -- is correct for that system. In addition, because Group policy is checked and applied during the system boot process for machine settings and again during logon for user settings, it is recommended that GPO's be applied to a computer from no more than five "layers" in the AD to keep reboot and/or login times from becoming unacceptably long.

  1. Why do we need Netlogon?

Maintains a secure channel between this computer and the domain controller for authenticating users and services. If this service is stopped, the computer may not authenticate users and services, and the domain controller cannot register DNS records.

  1. What are the Groups types available in active directory ?

Security groups: Use Security groups for granting permissions to gain access to resources. Sending an e-mail message to a group sends the message to all members of the group. Therefore security groups share the capabilities of distribution groups.

Distribution groups: Distribution groups are used for sending e-mail messages to groups of users. You cannot grant permissions to distribution groups. Even though security groups have all the capabilities of distribution groups, distribution groups are still required, because some applications can only read distribution groups.

  1. Explain about the groups scope in AD?

Domain Local Group: Use this scope to grant permissions to domain resources that are located in the same domain in which you created the domain local group. Domain local groups can exist at all mixed, native and interim functional levels of domains and forests. Domain local group memberships are not limited: you can add user accounts, universal groups and global groups from any domain as members. Note, however, that nesting cannot be done with domain local groups: a domain local group cannot be a member of another domain local group or any other group in the same domain.

Global Group: Users with a similar function can be grouped under the global scope and given permission to access a resource (like a printer or a shared folder and files) available in the local domain or another domain in the same forest. In simple words, global groups can be used to grant permissions to resources located in any domain within a single forest, but their memberships are limited: user accounts and global groups can be added only from the domain in which the global group is created. Nesting is possible with global groups, as you can add a global group to another global group in any domain; and to provide permissions to domain-specific resources (like printers and published folders), global groups can be members of a domain local group. Global groups exist at all mixed, native and interim functional levels of domains and forests.

Universal Group Scope: These groups are typically used for e-mail distribution and can be granted access to resources in all trusted domains; they can only be used as a security principal (security group type) in a domain at the Windows 2000 native or Windows Server 2003 domain functional level. Universal group memberships are not limited like global groups: all domain user accounts and groups can be members of a universal group. Universal groups can be nested under a global or domain local group in any domain.

  1. What is REPLMON?

The Microsoft definition of the Replmon tool is as follows; This GUI tool enables administrators to view the low-level status of Active Directory replication, force synchronization between domain controllers, view the topology in a graphical format, and monitor the status and performance of domain controller replication.

  1. What is NETDOM ?

NETDOM is a command-line tool that allows management of Windows domains and trust relationships. It is used for batch management of trusts, joining computers to domains, verifying trusts, and secure channels.

  1. Explain about Trust in AD ?

To allow users in one domain to access resources in another, Active Directory uses trusts. Trusts inside a forest are automatically created when domains are created.
The forest sets the default boundaries of trust, not the domain, and implicit, transitive trust is automatic for all domains within a forest. As well as two-way transitive trust, AD trusts can be a shortcut (joins two domains in different trees, transitive, one- or two-way), forest (transitive, one- or two-way), realm (transitive or nontransitive, one- or two-way), or external (nontransitive, one- or two-way) in order to connect to other forests or non-AD domains.

  1. Different modes of AD restore ?

A nonauthoritative restore is the default method for restoring Active Directory. To perform a nonauthoritative restore, you must be able to start the domain controller in Directory Services Restore Mode. After you restore the domain controller from backup, replication partners use the standard replication protocols to update Active Directory and associated information on the restored domain controller.

An authoritative restore brings a domain or a container back to the state it was in at the time of backup and overwrites all changes made since the backup. If you do not want to replicate the changes that have been made since the last backup operation, you must perform an authoritative restore. Inbound replication must be stopped before performing an authoritative restore.

  1. What is OU ?

An Organizational Unit is a container object in which you can keep objects such as user accounts, groups, computers, printers, applications and other OUs.
Within an organizational unit you can assign specific permissions to users. Organizational units can also be used to reflect departmental boundaries.

  1. What is Global Catalog?

The Global Catalog authenticates network user logons and fields inquiries about objects across a forest or tree. Every domain has at least one GC that is hosted on a domain controller. In Windows 2000, there was typically one GC on every site in order to prevent user logon failures across the network.

  1. When should you create a forest?

Organizations that operate on radically different bases may require separate trees with distinct namespaces. Unique trade or brand names often give rise to separate DNS identities. Organizations merge or are acquired and naming continuity is desired. Organizations form partnerships and joint ventures. While access to common resources is desired, a separately defined tree can enforce more direct administrative and security restrictions.

  1. What is group nesting?

Adding one group as a member of another group is called ‘group nesting’. This will help for easy administration and reduced replication traffic.

  1. How the AD authentication works ?

When a user enters a user name and password, the computer sends the user name to the Key Distribution Centre (KDC). The KDC contains a master database of unique long term keys for every principal in its realm. The KDC looks up the user’s master key (KA), which is based on the user’s password. The KDC then creates two items: a session key (SA) to share with the user and a Ticket-Granting Ticket (TGT). The TGT includes a second copy of the SA, the user name, and an expiration time. The KDC encrypts this ticket by using its own master key (KKDC), which only the KDC knows. The client computer receives the information from the KDC and runs the user’s password through a one-way hashing function, which converts the password into the user’s KA. The client computer now has a session key and a TGT so that it can securely communicate with the KDC. The client is now authenticated to the domain and is ready to access other resources in the domain by using the Kerberos protocol.
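
The “one-way hashing function” in the answer above can be illustrated locally. This is only a sketch of the one-way property: AD and Kerberos derive keys with their own algorithms, not plain SHA-256.

```shell
# Sketch: a password run through a one-way hash function.
# Illustrative only; Kerberos uses its own key derivation, not SHA-256.
password="password"
key="$(printf '%s' "$password" | sha256sum | cut -d' ' -f1)"
echo "derived key: $key"
```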

  1. What is Global Catalog and its function?

The global catalog is a distributed data repository that contains a searchable, partial representation of every object in every domain in a multidomain Active Directory Domain Services (AD DS) forest. The global catalog is stored on domain controllers that have been designated as global catalog servers and is distributed through multimaster replication. Searches that are directed to the global catalog are faster because they do not involve referrals to different domain controllers.

The global catalog provides the ability to locate objects from any domain without having to know the domain name. A global catalog server is a domain controller that, in addition to its full, writable domain directory partition replica, also stores a partial, read-only replica of all other domain directory partitions in the forest.

Forest-wide searches. The global catalog provides a resource for searching an AD DS forest. Forest-wide searches are identified by the LDAP port that they use. If the search query uses port 3268, the query is sent to a global catalog server.
User logon. In a forest that has more than one domain, two conditions require the global catalog during user authentication:

  • In a domain that operates at the Windows 2000 native domain functional level or higher, domain controllers must request universal group membership enumeration from a global catalog server.
  • When a user principal name (UPN) is used at logon and the forest has more than one domain, a global catalog server is required to resolve the name.

Universal Group Membership Caching. In a forest that has more than one domain, in sites that have domain users but no global catalog server, Universal Group Membership Caching can be used to enable caching of logon credentials so that the global catalog does not have to be contacted for subsequent user logons. This feature eliminates the need to retrieve universal group memberships across a WAN link from a global catalog server in a different site.

Exchange Address Book lookups. Servers running Microsoft Exchange Server rely on access to the global catalog for address information. Users use global catalog servers to access the global address list (GAL).
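
The port distinction described above (3268 for forest-wide global catalog searches versus 389 for a single domain partition) can be sketched as a small helper:

```shell
# Sketch: choose the LDAP port based on search scope.
# 3268 = global catalog (partial replica of every domain), 389 = one domain.
gc_port() {
  if [ "$1" = "forest" ]; then echo 3268; else echo 389; fi
}

echo "forest-wide search port: $(gc_port forest)"
echo "single-domain search port: $(gc_port domain)"
```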

  1. What are the physical components of Active Directory?

Domain controllers and sites. Domain controllers are physical computers running a Windows Server operating system and hosting the Active Directory database. Sites are network segments based on geographical location; each site can contain multiple domain controllers.

  1. What are the logical components of Active Directory?

Domains, Organizational Units, trees and forests are logical components of Active Directory.

  1. What is RODC? Why do we configure RODC?

Read-only domain controller (RODC) is a feature introduced in Windows Server 2008. An RODC holds a read-only copy of the Active Directory database and can be deployed in a remote branch office where physical security cannot be guaranteed. RODCs provide improved security and faster logon times for the branch office.

  1. What is role seizure? How do we perform role seizure?

Role seizure is the action of assigning an operations master role to a new domain controller without the support of the existing role holder (generally because it is offline due to a hardware failure). During role seizure, a new domain controller assumes the operations master role without communicating with the existing role holder. Role seizure can be done using repadmin.exe and Ntdsutil.exe commands.

  1. Tell me few uses of NTDSUTIL commands?

We can use ntdsutil commands to perform database maintenance of AD DS, manage and control single master operations, perform Active Directory backup restoration, and remove metadata left behind by domain controllers that were removed from the network without being properly uninstalled.

  1. A user is unable to log into his desktop which is connected to a domain. What are the troubleshooting steps you will consider?

Check the network connection on the desktop. Try to ping the domain controller and check whether name resolution is working. Check Active Directory for the computer account of the desktop. Compare the time settings on the desktop and the domain controller. If nothing else works, remove the desktop from the domain and rejoin it.

  1. A Domain Controller called ABC is failing replication with XYZ. How do you troubleshoot the issue?

Active Directory replication issues can occur for a variety of reasons, for example DNS problems, network problems or security issues. Troubleshooting can start by verifying DNS records, then removing and recreating the replication link between the domain controllers, and checking the time settings on both replication partners.
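
On a Windows domain controller, the checks above map to a few standard tools. A hedged sketch follows: the commands are only assembled into a list here, since repadmin, dcdiag and w32tm run only on Windows.

```shell
# Sketch: standard Windows commands for the replication checks described above.
# They run only on a DC, so this list is assembled rather than executed.
checks=(
  "repadmin /replsummary"    # summarize replication health forest-wide
  "repadmin /showrepl ABC"   # show inbound replication status for DC ABC
  "dcdiag /test:DNS"         # verify DNS registration and resolution
  "w32tm /query /status"     # check time synchronization
)
printf '%s\n' "${checks[@]}"
```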

  1. What do you understand by Garbage Collection? Explain.

Garbage collection is an Active Directory maintenance process. It starts by removing the remains of previously deleted objects, known as tombstones, from the database. It then deletes unnecessary log files and starts a defragmentation thread to reclaim free space. The garbage collection process runs on every domain controller at an interval of 12 hours.


Reference

]]>