
Home

Welcome to My Learning Journal

Hey, I'm Akib 👋, a Full Stack Developer & DevOps Engineer.
This is my personal collection of notes, guides, and cheat sheets on various topics in technology.
I use it both as a quick reference and as a way to share knowledge with others.

˗ˏˋ☕ˎˊ˗   Connect with me: Facebook LinkedIn X

📚 What You'll Find Here

This site is automatically built and deployed from my
NixOS configuration repository using Nix and mdBook.

  • ๐Ÿง Linux โ†’ Installation guides, system tools, and server configs.
  • ๐Ÿ’พ Databases โ†’ Notes on MySQL, Postgres, and more.
  • ๐Ÿš€ Deployment โ†’ Step-by-step guides on deploying applications.
  • ๐Ÿ› ๏ธ Dev Tools โ†’ Shell scripts, Git, automation tricks, and configs.

✨ Happy learning!
It's Sat Jan 24 19:20:31 UTC 2026, a great day to document something new.

Docker

Core Concepts

  • Image: A lightweight, standalone, executable package. It's a blueprint that includes everything needed to run an application: code, runtime, system tools, and libraries.
  • Container: A running instance of an image. It's the actual, isolated environment where your application runs. You can create, start, stop, and delete multiple containers from a single image.
  • Docker Hub: A public registry (like GitHub for code) where you can find, share, and store container images.
  • Dockerfile: A text file with instructions for building a Docker image.
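
To see how these concepts fit together, here is a minimal, hypothetical Dockerfile for a Node.js app (the base image, port, and entry file are assumptions for illustration):

# Start from an official base image (pulled from Docker Hub)
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests and install dependencies first (better caching)
COPY package*.json ./
RUN npm install

# Copy the rest of the application source code
COPY . .

# Document the port the app listens on (assumed here to be 3000)
EXPOSE 3000

# The command run when a container starts from this image
CMD ["node", "server.js"]

Building this file with docker build produces an image; docker run then creates containers from it.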

Image Management (Build, Pull, List)

Commands for building, downloading, and managing your local images.

  • docker build -t <name:tag> .
    • Build an image from a Dockerfile in the current directory (.). The -t flag tags it with a human-readable name and tag (e.g., -t my-app:latest).
  • docker build --no-cache ...
    • Build an image without using the cache. Use this to force a fresh build from scratch.
  • docker pull <image_name>
    • Download (pull) an image from a registry like Docker Hub (e.g., docker pull postgres).
  • docker images
    • List all images stored locally on your machine.
  • docker rmi <image_name>
    • Remove (delete) a local image. You may need to stop/remove containers using it first.
  • docker search <term>
    • Search Docker Hub for images matching a search term.

Container Lifecycle (Run, Stop, Interact)

Commands for creating, running, and managing your containers.

  • docker run <image_name>
    • Create and start a new container from an image.
  • docker run -d <image_name>
    • Run in detached mode (in the background). The terminal will be freed up.
  • docker run --name <my-name> ...
    • Give your container a custom name (e.g., my-db-container).
  • docker run -p 8080:80 ...
    • Map a port from your local machine (host) to the container. This example maps host port 8080 to container port 80.
  • docker run -v /path/on/host:/path/in/container ...
    • Mount a volume to persist data. This links a host directory to a container directory.
  • docker run --rm ...
    • Automatically remove the container when it stops. Excellent for temporary tasks and cleanup.
  • docker run -it <image_name> sh
    • Run in interactive mode (-it). This opens a shell (sh or bash) inside the new container.
  • docker exec -it <container_name> sh
    • Execute a command (like sh) inside an already running container.
  • docker start <container_name>
    • Start a stopped container.
  • docker stop <container_name>
    • Stop a running container gracefully.
  • docker kill <container_name>
    • Force-stop a running container immediately.
  • docker rm <container_name>
    • Remove a stopped container.
  • docker rm -f <container_name>
    • Force-remove a container (even if it's running).
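
As a worked example combining the flags above (the paths and names are hypothetical; the postgres image requires the POSTGRES_PASSWORD environment variable, passed here with -e):

# Run a database in the background with a custom name, a port mapping,
# and a host directory mounted for persistent data
docker run -d \
  --name my-db-container \
  -p 5432:5432 \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres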

Inspection & Logs

Commands for checking the status, logs, and details of your containers.

  • docker ps
    • List all running containers.
  • docker ps -a
    • List all containers (running and stopped).
  • docker logs <container_name>
    • Show the logs (console output) of a container.
  • docker logs -f <container_name>
    • Follow the logs in real-time (streams the live output).
  • docker inspect <container_name>
    • Show detailed information (JSON) about a container, including its IP address, port mappings, and volumes.
  • docker container stats
    • Show a live stream of resource usage (CPU, Memory, Network) for all running containers.

Docker Hub & Registries

Commands for authenticating and sharing your custom images.

  • docker login
    • Log in to Docker Hub or another container registry. You'll be prompted for your credentials.
  • docker push <username>/<image_name>
    • Push (upload) your local image to Docker Hub. The image must be tagged with your username first (e.g., docker build -t myuser/my-app .).

System Cleanup (QOL)

Essential commands for freeing up disk space.

  • docker container prune
    • Remove all stopped containers.
  • docker image prune
    • Remove dangling images (images that aren't tagged or used by any container).
  • docker image prune -a
    • Remove all unused images (any image not used by at least one container).
  • docker volume prune
    • Remove all unused volumes (volumes not attached to any container).
  • docker system prune
    • The "big one": removes all stopped containers, all dangling images, and all unused networks.
  • docker system prune -a --volumes
    • The "nuke": removes all stopped containers, all unused images (not just dangling), all unused networks, and all unused volumes.
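
To see how much space is reclaimable before (and after) pruning, docker system df gives a per-category breakdown:

# Show disk usage for images, containers, local volumes, and build cache
docker system df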

Docker Compose (Advanced)

The standard tool for defining and running multi-container applications (e.g., a web app, a database, and a cache). It uses a docker-compose.yml file.

  • docker compose up
    • Build and start all services defined in your docker-compose.yml file. Runs in the foreground.
  • docker compose up -d
    • Build and start all services in detached mode (in the background).
  • docker compose down
    • Stop and remove the containers and networks defined in the compose file (named volumes are kept by default).
  • docker compose down -v
    • Stop and remove everything, including named volumes.
  • docker compose ps
    • List all containers managed by the current compose project.
  • docker compose logs
    • Show logs from all services in the compose project.
  • docker compose logs -f <service_name>
    • Follow the logs in real-time for one or more specific services.
  • docker compose exec <service_name> sh
    • Execute a command (like sh) inside a running service's container.
  • docker compose build
    • Force a rebuild of the images for your services before starting.
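
For reference, here is a minimal docker-compose.yml sketch for a web app plus a database (the service names, images, and ports are assumptions for illustration):

services:
  web:
    build: .                 # Build from the Dockerfile in this directory
    ports:
      - "8080:80"            # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example   # Required by the postgres image
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:                   # Named volume for persistent database data

With this file in place, docker compose up -d starts both services on a shared network where they can reach each other by service name.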

Volumes & Networking (Advanced)

Commands for explicitly managing persistent data and custom networks.

  • docker volume ls
    • List all volumes on your system.
  • docker volume create <volume_name>
    • Create a new managed volume.
  • docker volume inspect <volume_name>
    • Show detailed information about a volume.
  • docker volume rm <volume_name>
    • Remove one or more volumes.
  • docker network ls
    • List all networks on your system.
  • docker network create <network_name>
    • Create a new custom bridge network. Containers on the same network can communicate by name.
  • docker network inspect <network_name>
    • Show detailed information about a network.
  • docker network connect <net> <container>
    • Connect a running container to an additional network.
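
A quick sketch of name-based communication between containers (the image and container names are hypothetical):

# Create a custom bridge network
docker network create my-net

# Start two containers attached to it
docker run -d --name db --network my-net -e POSTGRES_PASSWORD=secret postgres
docker run -d --name api --network my-net my-api-image

# Inside the 'api' container, the database is now reachable by the
# hostname 'db' (e.g., in a connection string such as postgres://db:5432)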

Git

🚀 Initial Configuration

Set these up once on any new machine.

  • git config --global user.name "Your Name"
    • Sets the name that will appear on your commits.
  • git config --global user.email "you@example.com"
    • Sets the email for your commits.
  • git config --global init.defaultBranch main
    • Sets the default branch name to main for new repos.
  • git config --global alias.lg "log --graph --oneline --decorate --all"
    • Creates a git lg shortcut for a clean, comprehensive log.
  • git config --global alias.st "status -s"
    • Creates a git st shortcut for a short, one-line status.

📦 Basic Workflow: Staging & Committing

This is your day-to-day command cycle.

  • git init
    • Initializes a new Git repository in the current directory.
  • git status
    • Shows the status of your working directory and staging area (untracked, modified, and staged files).
  • git add <file...>
    • Adds one or more specific files to the staging area.
    • Example: git add README.md package.json
  • git add .
    • Adds all new and modified files in the current directory to the staging area.
  • git add -p
    • Interactively stages parts of files. Git will show you each "hunk" of changes and ask if you want to stage it (y/n/q).
  • git commit -m "Your descriptive message"
    • Saves a permanent snapshot of the staged files to the project history.
  • git commit -am "Your message"
    • A shortcut to stage all tracked files and commit them in one step. (Note: Does not add new, untracked files).
  • git rm <file>
    • Removes a file from both the working directory and the staging area.
  • git rm --cached <file>
    • Removes a file from the staging area (index) but keeps the file in your working directory. Useful for "untracking" a file, like a config file you accidentally added.
  • git mv <old-name> <new-name>
    • Renames a file. This is equivalent to mv <old> <new>, git rm <old>, and git add <new>.
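
A typical day-to-day cycle using the commands above might look like this (file names are hypothetical):

git init                            # Start a new repository
git status                          # See what is untracked or modified
git add README.md                   # Stage a specific file
git commit -m "Add project README"  # Commit the staged snapshot
git rm --cached secrets.conf        # Stop tracking an accidentally added file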

📜 Inspecting History & Logs

See what has happened in the project.

  • git log
    • Shows the full commit history for the current branch.
  • git log --oneline
    • Shows a compact, one-line view of the commit history.
  • git lg (or git log --graph --oneline --decorate --all)
    • A powerful, customized log (using the alias from setup) that shows all branches, commit graphs, and tags in a clean one-line format.
  • git log -p <file>
    • Shows the commit history for a specific file, including the changes (patches) made in each commit.
  • git reflog
    • Shows a log of all movements of HEAD (commits, checkouts, resets, merges). This is your ultimate safety net for finding "lost" commits.

🌿 Branching & Merging

Manage parallel lines of development.

Branching

  • git branch
    • Lists all your local branches.
  • git branch -a
    • Lists all local and remote-tracking branches.
  • git branch <branch-name>
    • Creates a new branch based on your current HEAD.
  • git checkout <branch-name>
    • Switches your working directory to the specified branch.
  • git checkout -b <branch-name>
    • A shortcut to create a new branch and switch to it immediately.
  • git branch -m <new-name>
    • Renames the current branch.
  • git branch -d <branch-name>
    • Deletes a merged local branch. Git will stop you if the branch isn't merged (safety feature).
  • git branch -D <branch-name>
    • Force-deletes a local branch, even if it's not merged.

Merging & Rebasing

  • git merge <branch-name>
    • Merges the specified branch into your current branch. This creates a new "merge commit" if there are new commits on both branches (a non-fast-forward merge).
  • git rebase <branch-name>
    • Re-applies your current branch's commits on top of the specified branch. This creates a cleaner, linear history.
    • Example: You're on feature and main has been updated. Run git rebase main to move your feature work to the tip of main.
  • git rebase -i HEAD~3
    • Interactively rebase the last 3 commits. This opens an editor allowing you to squash (combine), reword (change message), edit, drop, or reorder commits.
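
For instance, git rebase -i HEAD~3 opens a todo list like the sketch below (the hashes and messages are made up); changing pick to squash folds a commit into the one above it:

pick a1b2c3d Add login form
squash b4d5e6f Fix typo in login form
squash c7f8a9b Adjust login form styling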

📥 Stashing

Temporarily save changes you aren't ready to commit.

  • git stash or git stash push -m "Your message"
    • Takes all your uncommitted changes (in tracked files), saves them, and cleans your working directory back to HEAD. (git stash save is the older, deprecated spelling.)
  • git stash list
    • Shows all stashes you've saved.
  • git stash pop
    • Applies the most recent stash to your working directory and deletes it from the stash list.
  • git stash apply <stash@{n}>
    • Applies a specific stash (e.g., stash@{1}) but does not delete it from the list.
  • git stash drop <stash@{n}>
    • Deletes a specific stash from the list.
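
A common stash round-trip looks like this (the hotfix branch is hypothetical):

git stash push -m "WIP: half-finished form"   # Save uncommitted changes
git checkout hotfix                           # Do urgent work elsewhere
git checkout -                                # Return to the previous branch
git stash pop                                 # Reapply and drop the stash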

📡 Remote Repositories (e.g., GitHub)

Manage connections to other repositories.

Managing Remotes

  • git remote add <name> <url>
    • Adds a new remote. The standard name is origin.
    • Example: git remote add origin https://github.com/user/repo.git
  • git remote -v
    • Lists all your remotes with their URLs.
  • git remote rename <old-name> <new-name>
    • Renames a remote.
  • git remote remove <name>
    • Removes a remote.

Syncing Changes

  • git fetch <remote-name>
    • Downloads all branches and history from the remote without merging them into your local branches. This is safe and lets you inspect changes first (see the sketch after this list).
  • git pull <remote-name> <branch-name>
    • A shortcut for git fetch followed by git merge. It fetches and immediately tries to merge the remote branch into your current local branch.
    • Example: git pull origin main
  • git push <remote-name> <branch-name>
    • Uploads your local branchโ€™s commits to the remote repository.
  • git push -u <remote-name> <branch-name>
    • Pushes and sets the remote as the "upstream" tracking branch. After this, you can just run git pull or git push from that branch.
  • git push <remote-name> --delete <branch-name>
    • Deletes a branch on the remote repository.
  • git push --force-with-lease
    • ⚠️ Force-pushes your local branch, overwriting the remote. This is safer than git push --force because it will fail if someone else has pushed new commits in the meantime. Use this only when you have rewritten history (e.g., rebase) and have coordinated with your team.
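
A sketch of the inspect-before-merge workflow that git fetch enables:

git fetch origin                       # Download remote history, merge nothing
git log main..origin/main --oneline    # Commits on the remote not yet in local main
git merge origin/main                  # Merge once you are satisfied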

โ†ฉ๏ธ Undoing & Rewriting History

How to fix mistakes โ€œafter the fact.โ€

Before Committing (Working Directory / Staging)

  • git restore <file>
    • Discards changes in your working directory. (The modern, clearer version of git checkout -- <file>).
  • git restore --staged <file>
    • Unstages a file, moving it from the staging area back to the working directory. (The modern version of git reset HEAD <file>).

After Committing (But Before Pushing)

  • git commit --amend
    • Lets you change the last commit's message or add more staged files to it. It replaces the last commit with a new one.
  • git reset --soft HEAD~1
    • Un-commits the last commit. The changes from that commit are moved to the staging area.
  • git reset --mixed HEAD~1 (This is the default)
    • Un-commits the last commit. The changes are moved to the working directory (unstaged).
  • git reset --hard HEAD~1
    • ⚠️ Destroys the last commit and all changes associated with it. Your working directory is reset to the state of the commit before it. Treat this as permanent (though git reflog can sometimes recover the commit).
  • git reset --hard <commit-hash>
    • ⚠️ Resets your entire project (working directory and index) to a specific commit. Discards all subsequent commits and changes.

After Pushing (Public Commits)

  • git revert <commit-hash>
    • The safe way to "undo" a public commit. This creates a new commit that is the exact inverse of the specified commit. It doesn't rewrite history.
  • git revert -m 1 <merge-commit-hash>
    • Reverts a merge commit. -m 1 tells Git which parent to keep (usually 1).
  • Changing a Pushed Commit Message:
    • This is highly disruptive to your team. Avoid if possible.
    1. git rebase -i HEAD~5 (Go back far enough to find the commit)
    2. Find the commit line, change pick to reword (or r).
    3. Save and close. Git will prompt you to enter the new message.
    4. git push --force-with-lease
    • You must force-push because you've rewritten public history. All collaborators will need to re-sync their branches.

๐Ÿ› ๏ธ Advanced Tools

Git Worktrees

Manage multiple branches in separate directories simultaneously.

  • git clone --bare . /path/to/my-bare-repo.git
    • Clones the current repository as a bare repository (a common base for a worktree-centric setup).
  • git worktree add <path> <branch-name>
    • Checks out a branch into a new directory. This is great for working on a hotfix while keeping your main feature branch checked out in your primary folder.
    • Example: git worktree add ../my-hotfix-branch hotfix
  • git worktree list
    • Shows all active worktrees.
  • git worktree remove <path>
    • Removes the worktree at the specified path.

Git Submodules

Manage a repository inside another repository.

  • git submodule add <repo-url> <path>
    • Adds the other repo as a submodule in the specified path.
  • git clone --recurse-submodules <repo-url>
    • Clones a repository and automatically initializes and updates all its submodules.
  • git submodule update --init --recursive
    • Run this after a normal git clone (or git pull) to initialize or update submodules.
  • Workflow for updating a submodule:
    1. cd <submodule-path>
    2. git checkout main (or desired branch)
    3. git pull
    4. cd .. (back to the parent repo)
    5. git add <submodule-path>
    6. git commit -m "Update submodule to latest"
    • This "parent" commit locks the submodule to the new commit hash you just pulled.

Relational

MySQL

MySQL Usage Guide

DATABASE

Creating, using, and managing databases.

-- Create a new database named myDB
CREATE DATABASE myDB; 

-- Switch to the newly created database
USE myDB; 

-- Delete the myDB database
DROP DATABASE myDB; 

-- Set the myDB database to read-only mode
ALTER DATABASE myDB READ ONLY = 1; 

-- Reset the read-only mode of the myDB database
ALTER DATABASE myDB READ ONLY = 0; 

TABLES

Creating and modifying tables to organize data.

-- Create an 'employees' table with specified columns
CREATE TABLE employees(
    employee_id INT,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    hourly_pay DECIMAL(5, 2),
    hire_date DATE
);

-- Retrieve all data from the 'employees' table
SELECT * FROM employees; 

-- Rename the 'employees' table to 'workers'
RENAME TABLE employees TO workers; 

-- Delete the 'employees' table
DROP TABLE employees; 

Altering Tables

-- Add a new column 'phone_number' to the 'employees' table
ALTER TABLE employees
ADD phone_number VARCHAR(15);

-- Rename the 'phone_number' column to 'email'
ALTER TABLE employees
RENAME COLUMN phone_number TO email;

-- Change the data type of the 'email' column
ALTER TABLE employees
MODIFY COLUMN email VARCHAR(100);

-- Change the position of the 'email' column
ALTER TABLE employees
MODIFY email VARCHAR(100) AFTER last_name;

-- Move the 'email' column to the first position
ALTER TABLE employees
MODIFY email VARCHAR(100) FIRST;

-- Delete the 'email' column
ALTER TABLE employees
DROP COLUMN email;

INSERT ROW

Inserting data into tables.

-- Insert a single row into the 'employees' table
INSERT INTO employees VALUES(1, "Akib", "Ahmed", 25.90, "2024-04-06");

-- Insert multiple rows into the 'employees' table
INSERT INTO employees VALUES 
(2, "Sakib", "Ahmed", 20.10, "2024-04-06"),
(3, "Rakib", "Ahmed", 16.40, "2024-04-06"),
(4, "Mula", "Ahmed", 10.90, "2024-04-06"),
(5, "Kodhu", "Ahmed", 19.70, "2024-04-06"),
(6, "Lula", "Ahmed", 23.09, "2024-04-06");

-- Insert specific fields into the 'employees' table
INSERT INTO employees (employee_id, first_name, last_name) VALUES(6, "Munia", "Khatun");

SELECT

Retrieving data from tables.

-- Retrieve all data from the 'employees' table
SELECT * FROM employees;

-- Retrieve specific fields from the 'employees' table
SELECT first_name, last_name FROM employees;

-- Retrieve data from the 'employees' table based on a condition
SELECT * FROM employees WHERE employee_id <= 2;

-- Retrieve data where the 'hire_date' column is NULL
SELECT * FROM employees WHERE hire_date IS NULL;

-- Retrieve data where the 'hire_date' column is not NULL
SELECT * FROM employees WHERE hire_date IS NOT NULL;

UPDATE & DELETE

Modifying and deleting data.

-- Update data in the 'employees' table based on a condition
UPDATE employees
SET hourly_pay = 10.3, hire_date = "2024-01-05"
WHERE employee_id = 7;

-- Update all rows in the 'employees' table for the 'hourly_pay' column
UPDATE employees
SET hourly_pay = 10.3;

-- Delete rows from the 'employees' table where 'hourly_pay' is NULL
DELETE FROM employees
WHERE hourly_pay IS NULL;

-- Delete the 'date_time' column from the 'employees' table
ALTER TABLE employees
DROP COLUMN date_time;

AUTO-COMMIT, COMMIT & ROLLBACK

Managing transactions.

-- Turn off auto-commit mode
SET AUTOCOMMIT = OFF;

-- Manually save changes made in the current transaction
COMMIT;

-- Delete all data from the 'employees' table
DELETE FROM employees;

-- Roll back changes made in the current transaction
ROLLBACK;

DATE & TIME

Working with date and time data.

-- Add a 'join_time' column to the 'employees' table
ALTER TABLE employees
ADD COLUMN join_time TIME;

-- Update the 'join_time' column with the current time
UPDATE employees
SET join_time = CURRENT_TIME();

-- Update the 'hire_date' column based on a condition
UPDATE employees
SET hire_date = CURRENT_DATE() + INTERVAL 1 DAY
WHERE hourly_pay >= 20;

-- Add a 'date_time' column to the 'employees' table
ALTER TABLE employees
ADD COLUMN date_time DATETIME;

-- Update the 'date_time' column with the current date and time
UPDATE employees
SET date_time = NOW();

-- Redefine the 'hire_date' column (CHANGE COLUMN can rename and retype; here the name stays the same)
ALTER TABLE employees
CHANGE COLUMN hire_date hire_date DATE;

CONSTRAINTS

Ensuring data integrity with constraints.

UNIQUE

-- Create a 'products' table with a unique constraint on the 'product_name' column
CREATE TABLE products(
    product_id INT,
    product_name VARCHAR(50) UNIQUE,
    product_price DECIMAL(4,2)
);

-- Add a unique constraint to the 'product_name' column in the 'products' table
ALTER TABLE products
ADD CONSTRAINT UNIQUE(product_name);

-- Insert data into the 'products' table
INSERT INTO products VALUES
(1, "tea", 15.9),
(2, "coffee", 20.89),
(3, "lemon", 10.10);

NOT NULL

-- Create a 'products' table with a NOT NULL constraint on the 'product_price' column
CREATE TABLE products(
    product_id INT,
    product_name VARCHAR(50) UNIQUE,
    product_price DECIMAL(4,2) NOT NULL
);

-- Update the 'product_price' column to be NOT NULL
ALTER TABLE products
MODIFY product_price DECIMAL(4,2) NOT NULL;

-- Insert data into the 'products' table with a NOT NULL column
INSERT INTO products VALUES(4, "mango", 0);

CHECK

-- Create an 'employees' table with a check constraint on the 'hourly_pay' column
CREATE TABLE employees(
    employee_id INT,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    hourly_pay DECIMAL(5, 2),
    hire_date DATE,
    CONSTRAINT chk_hourly_pay CHECK (hourly_pay >= 10)
);

-- Add a check constraint to the 'hourly_pay' column
ALTER TABLE employees
ADD CONSTRAINT chk_hourly_pay CHECK(hourly_pay >= 10);

-- Insert data into the 'employees' table
INSERT INTO employees VALUES(7, "Kutta", "Mizan", 10.0, CURRENT_DATE());

DEFAULT

-- Create a 'products' table with a default value for the 'product_price' column
CREATE TABLE products(
    product_id INT,
    product_name VARCHAR(50) UNIQUE,
    product_price DECIMAL(4,2) DEFAULT 0
);

-- Set the default value for the 'product_price' column
ALTER TABLE products
ALTER product_price SET DEFAULT 0;

-- Insert data into the 'products' table with a default value
INSERT INTO products (product_id, product_name) VALUES(5, "soda");

-- Create a 'transactions' table with a default value for the 'transaction_date' column
CREATE TABLE transactions(
    transaction_id INT,
    amount DECIMAL(5,2),
    transaction_date DATETIME DEFAULT NOW()
);

PRIMARY KEY

-- Create a table for transactions with a primary key
CREATE TABLE transactions(
    transaction_id INT PRIMARY KEY,
    amount DECIMAL(4,2),
    transaction_date DATETIME
);

-- Add a primary key constraint
ALTER TABLE transactions
ADD CONSTRAINT PRIMARY KEY(transaction_id);

-- Set the auto-increment starting value for the primary key
ALTER TABLE transactions AUTO_INCREMENT = 1000;

-- Insert data into the transactions table
INSERT INTO transactions(amount) VALUES (54.20);

-- Select all data from the transactions table
SELECT * FROM transactions;

AUTO_INCREMENT

-- Create a table for transactions with an auto-increment primary key
CREATE TABLE transactions(
    transaction_id INT PRIMARY KEY AUTO_INCREMENT,
    amount DECIMAL(5,2),
    transaction_date DATETIME DEFAULT NOW()
);

-- Set the auto-increment starting value
ALTER TABLE transactions AUTO_INCREMENT = 1000;

-- Insert data into the transactions table, auto-increment starts from 1000
INSERT INTO transactions(amount) VALUES (45.20), (23.40), (98.00), (43.45);

-- Select all data from the transactions table
SELECT * FROM transactions;

FOREIGN KEY

-- Create a table for customers with a primary key
CREATE TABLE customers(
    customer_id INT PRIMARY KEY AUTO_INCREMENT,
    first_name VARCHAR(50),
    last_name VARCHAR(50)
);

-- Create a table for transactions with a foreign key constraint
CREATE TABLE transactions(
    transaction_id INT PRIMARY KEY AUTO_INCREMENT,
    amount DECIMAL(5,2),
    transaction_date DATETIME DEFAULT NOW(),
    customer_id INT,
    FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
);

-- Add a foreign key constraint to the transactions table
ALTER TABLE transactions
ADD CONSTRAINT fk_customer_key
FOREIGN KEY(customer_id) REFERENCES customers(customer_id);

-- Insert data into the transactions table with customer_id
INSERT INTO transactions(amount, customer_id) VALUES (34.34, 1), (123.4, 3), (32.32, 1), (12.00, 2);

JOIN

Combining data from multiple tables.

-- Inner join transactions and customers tables
SELECT * 
FROM transactions 
INNER JOIN customers
ON transactions.customer_id = customers.customer_id;

-- Select specific fields from joined tables
SELECT transaction_id, transaction_date, first_name, last_name
FROM transactions 
INNER JOIN customers
ON transactions.customer_id = customers.customer_id;

-- Left join transactions and customers tables
SELECT *
FROM transactions 
LEFT JOIN customers
ON transactions.customer_id = customers.customer_id;

-- Right join transactions and customers tables
SELECT *
FROM transactions 
RIGHT JOIN customers
ON transactions.customer_id = customers.customer_id;

FUNCTIONS

Built-in SQL functions.

-- Count the number of transactions
SELECT COUNT(amount) AS "Transaction count" FROM transactions;

-- Find the maximum amount
SELECT MAX(amount) AS max_dollar FROM transactions;

-- Find the minimum amount
SELECT MIN(amount) AS min_dollar FROM transactions;

-- Find the average amount
SELECT AVG(amount) AS avg_dollar FROM transactions;

-- Calculate the total amount
SELECT SUM(amount) AS sum_of_dollar FROM transactions;

-- Concatenate first_name and last_name into a new column
SELECT CONCAT(first_name, " ", last_name) as full_name FROM customers;

AND, OR & NOT

Combining conditions in SQL queries.

-- Add a job column to the employees table
ALTER TABLE employees
ADD COLUMN job VARCHAR(50) AFTER hourly_pay;

-- Update job data based on employee_id
UPDATE employees
SET job = "Programmer" 
WHERE employee_id = 1;

-- Select employees with specific conditions
SELECT * FROM employees
WHERE employee_id >= 2 AND employee_id <= 6 AND job = "vendor";

-- Select employees with specific conditions using OR
SELECT * FROM employees
WHERE job = "programmer" OR job = "vendor";

-- Select employees with specific conditions using NOT
SELECT * FROM employees
WHERE NOT job = "programmer" AND NOT job = "vendor";

-- Select employees within a certain hourly pay range
SELECT * FROM employees
WHERE hourly_pay BETWEEN 15 AND 26;

-- Select employees with specific jobs using IN
SELECT * FROM employees
WHERE job IN("programmer", "vendor", "doctor");

WILD-CARDS

Using wildcards for pattern matching.

-- Select employees with first name ending with "hu"
SELECT * FROM employees
WHERE first_name LIKE "%hu";

-- Select employees hired on a specific day (07)
SELECT * FROM employees
WHERE hire_date LIKE "____-__-07";

-- Select employees with job ending with "e" followed by another character
SELECT * FROM employees
WHERE job LIKE "%e_";

ORDER BY

Sorting query results.

-- Select employees ordered by hourly pay in ascending order
SELECT * FROM employees
ORDER BY hourly_pay ASC;

-- Select employees ordered by hire date in descending order
SELECT * FROM employees
ORDER BY hire_date DESC;

-- Select transactions ordered by amount in descending order and customer_id in ascending order
SELECT * FROM transactions
ORDER BY amount DESC, customer_id ASC;

LIMIT

Limiting the number of records returned.

-- Select the first 3 customers
SELECT * FROM customers
LIMIT 3;

-- Select the last 3 customers ordered by customer_id
SELECT * FROM customers
ORDER BY customer_id DESC LIMIT 3;

-- Select 2 customers starting from the 1st position (pagination)
SELECT * FROM customers
LIMIT 0,2;

UNION

Combining results from multiple SELECT statements.

-- Combine unique first and last names from employees and customers
SELECT first_name, last_name FROM employees
UNION
SELECT first_name, last_name FROM customers;

-- Combine all first and last names from employees and customers, including duplicates
SELECT first_name, last_name FROM employees
UNION ALL
SELECT first_name, last_name FROM customers;

SELF JOIN

Joining a table to itself.

-- Add a referral_id column to the customers table
ALTER TABLE customers
ADD COLUMN referral_id INT;

-- Update referral_id for customers
UPDATE customers
SET referral_id = 1
WHERE customer_id = 2;

-- Self join to show referred customers
SELECT a.customer_id, a.first_name, a.last_name,
       CONCAT(b.first_name, " ", b.last_name) AS "referred_by"
FROM customers AS a
INNER JOIN customers AS b
ON a.referral_id = b.customer_id;

-- Add a supervisor_id column to the employees table
ALTER TABLE employees
ADD supervisor_id INT;

-- Update supervisor_id for employees
UPDATE employees
SET supervisor_id = 7 
WHERE employee_id BETWEEN 2 and 6;

-- Update supervisor_id for a specific employee
UPDATE employees
SET supervisor_id = 1 
WHERE employee_id = 7;

-- Self join to show employees and their supervisors
SELECT a.employee_id, a.first_name, a.last_name,
       CONCAT(b.first_name, " ", b.last_name) AS "reports to"
FROM employees AS a
INNER JOIN employees AS b
ON a.supervisor_id = b.employee_id;

VIEWS

Creating and using virtual tables.

-- Create a view based on the employees table
CREATE VIEW employee_attendance AS
SELECT first_name, last_name
FROM employees;

-- Retrieve data from the view
SELECT * FROM employee_attendance
ORDER BY last_name ASC;

-- Create a view for customer emails
CREATE VIEW customer_emails AS
SELECT email
FROM customers;

-- Insert data into the customers table and view the changes in the view
INSERT INTO customers
VALUES(6, "Musa", "Rahman", NULL, "musa@mail.com");
SELECT * FROM customers;
SELECT * FROM customer_emails;

INDEX

Improving query performance with indexes.

-- Show indexes for the customers table
SHOW INDEXES FROM customers;

-- Create an index on the last_name column
CREATE INDEX last_name_index
ON customers(last_name);

-- Use the index to speed up search
SELECT * FROM customers
WHERE last_name = "Chan";

-- Create a multi-column index
CREATE INDEX last_name_first_name_idx
ON customers(last_name, first_name);

-- Drop an index
ALTER TABLE customers
DROP INDEX last_name_index;

-- Benefit from the multi-column index during search
SELECT * FROM customers
WHERE last_name = "Chan" AND first_name = "Kuki";

SUB-QUERY

Using sub-queries to nest queries within queries.

-- Get the average hourly pay
SELECT AVG(hourly_pay) FROM employees;

-- Use a sub-query to get the average hourly pay within a larger query
SELECT first_name, last_name, hourly_pay,
       (SELECT AVG(hourly_pay) FROM employees) AS avg_hourly_pay
FROM employees;

-- Filter rows based on a sub-query result
SELECT first_name, last_name, hourly_pay 
FROM employees
WHERE hourly_pay >= (SELECT AVG(hourly_pay) FROM employees);

-- Use a sub-query with IN to filter customers
SELECT first_name, last_name
FROM customers
WHERE customer_id IN (SELECT DISTINCT customer_id
                      FROM transactions
                      WHERE customer_id IS NOT NULL);

-- Use a sub-query with NOT IN to filter customers
SELECT first_name, last_name
FROM customers
WHERE customer_id NOT IN (SELECT DISTINCT customer_id
                          FROM transactions
                          WHERE customer_id IS NOT NULL);

GROUP BY

Aggregating data with grouping.

-- Sum amounts grouped by transaction date
SELECT SUM(amount), transaction_date
FROM transactions
GROUP BY transaction_date;

-- Get the maximum amount per customer
SELECT MAX(amount), customer_id
FROM transactions
GROUP BY customer_id;

-- Count transactions per customer having more than one transaction
SELECT COUNT(amount), customer_id
FROM transactions
GROUP BY customer_id
HAVING COUNT(amount) > 1 AND customer_id IS NOT NULL;

ROLL-UP

Extending group by with roll-up for super-aggregate values.

-- Sum amounts with a roll-up
SELECT SUM(amount), transaction_date
FROM transactions
GROUP BY transaction_date WITH ROLLUP;

-- Count transactions with a roll-up
SELECT COUNT(transaction_id) AS "# of orders", customer_id
FROM transactions 
GROUP BY customer_id WITH ROLLUP;

-- Sum hourly pay with a roll-up
SELECT SUM(hourly_pay) AS "hourly pay", employee_id
FROM employees
GROUP BY employee_id WITH ROLLUP;

ON-DELETE

Handling foreign key deletions.

-- Delete a customer record
DELETE FROM customers
WHERE customer_id = 3;

-- Disable foreign key checks and delete a customer
SET foreign_key_checks = 0;
DELETE FROM customers
WHERE customer_id = 3;
SET foreign_key_checks = 1;

-- Insert a customer record
INSERT INTO customers
VALUES(3, "Shilpi", "Akter", 3, "shilpy@mail.com");

-- Create a table with ON DELETE SET NULL
CREATE TABLE transactions(
    transaction_id INT PRIMARY KEY,
    amount DECIMAL(5,3),
    customer_id INT,
    order_date DATE,
    FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
    ON DELETE SET NULL
);

-- Update an existing table with ON DELETE SET NULL
ALTER TABLE transactions 
DROP FOREIGN KEY fk_customer_key;
ALTER TABLE transactions 
ADD CONSTRAINT fk_customer_key
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
ON DELETE SET NULL;

-- Create or alter a table with ON DELETE CASCADE
ALTER TABLE transactions
ADD CONSTRAINT fk_transaction_id
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
ON DELETE CASCADE;

STORED PROCEDURE

Creating reusable SQL code blocks.

-- Create a procedure
DELIMITER $$
CREATE PROCEDURE get_customers()
BEGIN
    SELECT * FROM customers;
END $$
DELIMITER ;

-- Delete a procedure
DROP PROCEDURE get_customers;

-- Create a procedure with an argument
DELIMITER $$
CREATE PROCEDURE find_customer(IN id INT)
BEGIN
    SELECT * FROM customers WHERE customer_id = id;
END $$
DELIMITER ;

-- Create a procedure with multiple arguments (drop the single-argument version first, since the name is reused)
DELIMITER $$
CREATE PROCEDURE find_customer(IN f_name VARCHAR(50), IN l_name VARCHAR(50))
BEGIN 
    SELECT * FROM customers WHERE first_name = f_name AND last_name = l_name;
END $$
DELIMITER ;

-- Call a procedure
CALL find_customer("Akib", "Ahmed");

TRIGGERS

Automatically performing actions in response to events.

-- Add a salary column to the employees table
ALTER TABLE employees
ADD COLUMN salary DECIMAL(10,2) AFTER hourly_pay;

-- Calculate salary based on hourly pay
UPDATE employees
SET salary = hourly_pay * 2080;

-- Create a trigger to update salary before updating hourly pay
CREATE TRIGGER before_hourly_pay_update
BEFORE UPDATE ON employees
FOR EACH ROW
SET NEW.salary = (NEW.hourly_pay * 2080);

-- Update hourly pay and see the trigger in action
UPDATE employees 
SET hourly_pay = 50
WHERE employee_id = 1;

-- Create a trigger to update salary before inserting a new employee
CREATE TRIGGER before_hourly_pay_insert
BEFORE INSERT ON employees
FOR EACH ROW
SET NEW.salary = (NEW.hourly_pay * 2080);

-- Insert a new employee and see the trigger in action
INSERT INTO employees
VALUES(6, "Shel", "Plankton", 10, NULL, "Janitor", "2024-06-17", "09:22:23", 7);

-- Create a table for expenses
CREATE TABLE expenses(
    expense_id INT PRIMARY KEY,
    expense_name VARCHAR(50),
    expense_total DECIMAL(10,2)
);

-- Insert initial data into the expenses table
INSERT INTO expenses
VALUES (1, "salaries", 0), (2, "supplies", 0), (3, "taxes", 0);

-- Update expenses based on salaries
UPDATE expenses 
SET expense_total = (SELECT SUM(salary) FROM employees)
WHERE expense_name = "salaries";

-- Create a trigger to update expenses after deleting an employee
CREATE TRIGGER after_salary_delete
AFTER DELETE ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total - OLD.salary
WHERE expense_name = "salaries";

-- Delete an employee and see the trigger in action
DELETE FROM employees
WHERE employee_id = 6;

-- Create a trigger to update expenses after inserting a new employee
CREATE TRIGGER after_salary_insert
AFTER INSERT ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total + NEW.salary
WHERE expense_name = "salaries";

-- Insert a new employee and see the trigger in action
INSERT INTO employees
VALUES(6, "Shel", "Plankton", 10, NULL, "Janitor", "2024-06-17", "09:22:23", 7);

-- Create a trigger to update expenses after updating an employee's salary
CREATE TRIGGER after_salary_update
AFTER UPDATE ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total + (NEW.salary - OLD.salary)
WHERE expense_name = "salaries";

-- Update an employee's hourly pay and see the trigger in action
UPDATE employees
SET hourly_pay = 100
WHERE employee_id = 1;

PostgreSQL

PostgreSQL Quick Guide

A concise guide to common PostgreSQL commands, syntax, and concepts.

Key Differences from MySQL:

  • Strings: Use single quotes only (e.g., 'Hello World').
  • Identifiers: (Table/column names) are folded to lowercase, i.e. effectively case-insensitive, unless you wrap them in double quotes (e.g., "myColumn").
  • Switching DBs: There is no USE db_name; command. In the psql terminal, use the \c db_name meta-command.
  • Auto-Increment: Use the SERIAL or GENERATED AS IDENTITY keyword.
  • Concatenation: The standard SQL || operator is preferred (e.g., first_name || ' ' || last_name).
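
A small example of the quoting and concatenation rules above (using the employees table from later in this guide):

-- Single quotes for string literals; || for concatenation
SELECT first_name || ' ' || last_name AS full_name
FROM employees
WHERE first_name = 'Akib';

-- Unquoted identifiers fold to lowercase, so these are the same column:
SELECT first_name, "first_name" FROM employees;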

psql Command Line Basics

psql is the interactive terminal for PostgreSQL.

Connecting:

# Connect to a specific database as a specific user
psql -d myDB -U myUser -h localhost

Common Meta-Commands (start with \):

  • \l: List all databases.
  • \c db_name: Connect to a different database.
  • \dt: List all tables in the current database.
  • \d table_name: Describe a table (columns, indexes, constraints).
  • \dn: List all schemas.
  • \df: List all functions.
  • \du: List all users (roles).
  • \timing: Toggles query execution time display.
  • \e: Open the last query in your text editor.
  • \q: Quit psql.
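
An illustrative psql session (the database names are hypothetical; the # comments are annotations, not input):

$ psql -d myDB -U myUser       # connect from the shell
myDB=> \l                      # list databases
myDB=> \c otherdb              # switch to another database
otherdb=> \dt                  # list tables there
otherdb=> \q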

Database & Role Management

Manage databases and user permissions.

-- Create a new database
CREATE DATABASE myDB;

-- Delete a database
DROP DATABASE myDB;

-- Create a new user (role) with login permission
CREATE ROLE myUser WITH LOGIN PASSWORD 'my_password';

-- Grant privileges for a user on a table
GRANT ALL ON employees TO myUser;

-- Grant privileges to connect to a database
GRANT CONNECT ON DATABASE myDB TO myUser;

Tables & Data Types

Create, modify, and delete tables.

-- Create a table with common data types
CREATE TABLE employees (
    employee_id SERIAL PRIMARY KEY, -- Auto-incrementing primary key
    first_name VARCHAR(50) NOT NULL,
    hourly_pay NUMERIC(5, 2) DEFAULT 10.00, -- Equivalent to DECIMAL
    hire_date DATE DEFAULT CURRENT_DATE,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Modify an existing table (RENAME COLUMN must be its own statement)
ALTER TABLE employees
    ADD COLUMN email VARCHAR(100) UNIQUE,
    ALTER COLUMN hourly_pay TYPE NUMERIC(6, 2),
    DROP COLUMN some_old_column;

ALTER TABLE employees
    RENAME COLUMN hire_date TO joined_date;

-- Rename a table
ALTER TABLE employees RENAME TO workers;

-- Delete a table
DROP TABLE employees;

Note: PostgreSQL does not support reordering columns (like AFTER or FIRST). You must recreate the table.


Constraints

Rules to ensure data integrity, best defined at creation.

CREATE TABLE products (
    product_id SERIAL PRIMARY KEY,
    product_name VARCHAR(50) UNIQUE NOT NULL,
    price NUMERIC(6, 2) DEFAULT 0,
    category_id INT,

    -- Check constraint
    CONSTRAINT chk_price CHECK (price >= 0),

    -- Foreign key constraint with actions
    CONSTRAINT fk_category
        FOREIGN KEY(category_id)
        REFERENCES categories(category_id)
        ON DELETE SET NULL -- or ON DELETE CASCADE
);

-- Add a constraint to an existing table
ALTER TABLE employees
ADD CONSTRAINT chk_hourly_pay CHECK(hourly_pay >= 10.00);

Manipulating Data (CRUD)

The four basic data operations: Create, Read, Update, Delete.

-- CREATE (Insert)
-- Insert a single row (best practice to name columns)
INSERT INTO employees (first_name, last_name, hourly_pay)
VALUES ('Akib', 'Ahmed', 25.90);

-- Insert multiple rows
INSERT INTO employees (first_name, last_name, hourly_pay) VALUES
('Sakib', 'Ahmed', 20.10),
('Rakib', 'Ahmed', 16.40);

-- READ (Select)
SELECT * FROM employees;

-- UPDATE (Update)
UPDATE employees
SET hourly_pay = 27.50, email = 'akib@mail.com'
WHERE employee_id = 1;

-- DELETE (Delete)
DELETE FROM employees
WHERE employee_id = 1;

Transactions

Ensure that a group of SQL statements either all succeed or all fail together.

-- Start a transaction block
BEGIN;

-- Make changes
UPDATE employees SET hourly_pay = 99.00 WHERE employee_id = 2;
DELETE FROM employees WHERE employee_id = 3;

-- To undo the changes in this block
ROLLBACK;

-- To make the changes permanent
COMMIT;

Querying: Filtering & Sorting

Use SELECT to retrieve data with complex conditions.

SELECT
    first_name || ' ' || last_name AS full_name,
    hourly_pay,
    joined_date
FROM employees
WHERE
    (hourly_pay > 20 OR job IS NULL)
    AND first_name ILIKE 'a%' -- ILIKE is case-insensitive LIKE
ORDER BY
    joined_date DESC,
    first_name ASC
LIMIT 10 OFFSET 5; -- Skip 5 rows, fetch the next 10 (for pagination)

Querying: Aggregation

Summarize data using aggregate functions and GROUP BY.

SELECT
    job,
    COUNT(employee_id) AS "employee_count",
    AVG(hourly_pay) AS avg_pay,
    SUM(hourly_pay) AS total_payroll
FROM employees
WHERE joined_date > '2023-01-01'
GROUP BY job
HAVING COUNT(employee_id) > 2 -- Filter groups, not rows
ORDER BY avg_pay DESC;

-- Use ROLLUP to get a grand total row
SELECT job, SUM(hourly_pay)
FROM employees
GROUP BY ROLLUP(job); -- Will add a final row with the total sum

Querying: Joins

Combine rows from two or more tables.

-- INNER JOIN: Returns only matching rows from both tables
SELECT e.first_name, t.amount
FROM employees AS e
INNER JOIN transactions AS t ON e.employee_id = t.employee_id;

-- LEFT JOIN: Returns all rows from the left (employees) table,
-- and matching rows from the right (transactions) table.
SELECT e.first_name, t.amount
FROM employees AS e
LEFT JOIN transactions AS t ON e.employee_id = t.employee_id;

-- SELF JOIN: Join a table to itself
SELECT a.first_name AS employee, b.first_name AS supervisor
FROM employees AS a
LEFT JOIN employees AS b ON a.supervisor_id = b.employee_id;

Querying: Combining

Combine the results of multiple SELECT statements.

-- UNION: Combines results and removes duplicates
SELECT first_name, last_name FROM employees
UNION
SELECT first_name, last_name FROM customers;

-- UNION ALL: Combines results and keeps all duplicates
SELECT first_name, last_name FROM employees
UNION ALL
SELECT first_name, last_name FROM customers;

-- Sub-query: Use a query result as a condition
SELECT * FROM employees
WHERE hourly_pay > (SELECT AVG(hourly_pay) FROM employees);

-- Common Table Expression (CTE): A temporary, named result set
WITH highest_payers AS (
    SELECT * FROM employees WHERE hourly_pay > 50
)
SELECT * FROM highest_payers WHERE joined_date < '2024-01-01';

Database Objects

Reusable SQL components.

Views

A virtual table based on a SELECT query.

-- Create a read-only view
CREATE VIEW v_high_earners AS
SELECT employee_id, first_name, hourly_pay
FROM employees
WHERE hourly_pay > 30;

-- Query the view like a table
SELECT * FROM v_high_earners;

Indexes

Speed up data retrieval on frequently queried columns.

-- Create an index
CREATE INDEX idx_employees_last_name
ON employees(last_name);

-- Create a multi-column index
CREATE INDEX idx_employees_name
ON employees(last_name, first_name);

-- Drop an index
DROP INDEX idx_employees_last_name;

Stored Functions

Reusable blocks of code. In Postgres, these are typically functions that return a value or a table.

-- Create a function in the plpgsql language
CREATE OR REPLACE FUNCTION find_employee_by_id(id INT)
RETURNS SETOF employees AS $$
BEGIN
    RETURN QUERY
    SELECT * FROM employees WHERE employee_id = id;
END;
$$ LANGUAGE plpgsql;

-- Call the function
SELECT * FROM find_employee_by_id(1);

Triggers

A function that automatically runs when an event (INSERT, UPDATE, DELETE) occurs on a table.

This is a two-step process:

-- 1. Create the trigger FUNCTION
CREATE OR REPLACE FUNCTION log_last_update()
RETURNS TRIGGER AS $$
BEGIN
    -- NEW refers to the row being inserted or updated
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- 2. Bind the function to a table with a TRIGGER
CREATE TRIGGER trg_employees_update
BEFORE UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION log_last_update();

Advanced Features

JSONB

Store and query JSON data efficiently.

CREATE TABLE products (id SERIAL, data JSONB);
INSERT INTO products (data)
VALUES ('{"name": "Coffee", "tags": ["hot", "drink"]}');

-- Query a JSON key (->> returns as text)
SELECT * FROM products WHERE data->>'name' = 'Coffee';

-- Check if a JSON array contains a value
SELECT * FROM products WHERE data @> '{"tags": ["hot"]}';

Window Functions

Perform aggregate calculations over a "window" of rows without collapsing them.

-- Get each employee's salary AND the average salary for their job
SELECT
    first_name,
    job,
    hourly_pay,
    AVG(hourly_pay) OVER (PARTITION BY job) AS avg_job_pay,
    RANK() OVER (ORDER BY hourly_pay DESC) AS pay_rank
FROM employees;

Web_App

Linux Server Setup & MERN App Deployment

These are the steps to set up an Ubuntu server from scratch and deploy a MERN app with the PM2 process manager and Nginx. We are using Linode, but you could just as well use a different cloud provider or your own machine or VM.

Create an account at Linode

Click on Create Linode

Choose your server options (OS, region, etc)

SSH Keys

You will see on the setup page an area to add an SSH key.

There are a few ways that you can log into your server. You can use passwords; however, if you want to be more secure, I would suggest setting up SSH keys and then disabling passwords. That way you can only log in to your server from a PC that has the correct keys set up.

I am going to show you how to set up authentication with SSH, but if you want to just use a password, you can skip most of this stuff.

You need to generate an SSH key on your local machine to log in to your server remotely. Open your terminal and type

ssh-keygen

By default, it will create your public and private key files in the .ssh directory on your local machine and name them id_rsa and id_rsa.pub. You can change this if you want, just make sure when it asks, you put the entire path to the key as well as the filename. I am using id_rsa_linode
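
For example, to generate the key with the custom file name in one go (the -f flag sets the output path, so you can skip typing it at the prompt):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_linode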

Once you do that, you need to copy the public key. You can use the cat command and then copy the key

cat ~/.ssh/id_rsa_linode.pub

Copy the key. It will look something like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEwMkP0KHX19q2dM/9pB9dpB2B/FwdeP4egXCgdEOraJuqGvaylKgbu7XDFinP6ByqJQg/w8vRV0CsFXrnr+Lh51fKv8ZPvV/yRIMjxKzNn/0+asatkjrkOwT3f3ipbzfS0bsqfWTHivZ7UNMrOHaaSezxvJpPGbW3aoTCFSA/sUUUSiWZ65v7I/tFkXE0XH+kSDFbLUDDNS1EzofWZFRcdSFbC3zrGsQHN3jcit6ba7bACQYixxFCgVB0mZO9SOgFHC64PEnZh5hJ8h8AqIjf5hDF9qFdz2jFEe/4aQmKQAD3xAPKTXDLLngV/2yFF0iWpnJ9MZ/mJoLVzhY2pfkKgnt/SUe/Hn1+jhX4wrz7wTDV4xAe35pmnajFjDppJApty+JOzKf3ifr4lNeZ5A99t9Pu0294BhYxm7/mKXiWPsevX9oSZxSJmQUtqWWz/KBVoVjlTRgAgLYbKCNBzmw7+qdRxoxxscQCQrCpJMlat56vxK8cjqiESvduUu78HHE= trave@ASUS

Now paste that into the Linode.com textarea and name it (e.g. My PC).

At some point, you will be asked to enter a root password for your server as well.

Connecting as Root

Finish the setup and then you will be taken to your dashboard. The status will probably say Provisioning. Wait until it says Running and then open your local machine's terminal and connect as root. Of course, you want to use your own server's IP address

ssh root@69.164.222.31

At this point, passwords are enabled, so you will be asked for your root password.

If you authenticate and log in, you should see a welcome message and your prompt should now say root@localhost:~#. This is your remote server.

I usually suggest updating and upgrading your packages

sudo apt update
sudo apt upgrade

Create a new user

Right now you are logged in as root and it is a good idea to create a new account. Using the root account can be a security risk.

You can check your current user with the command:

whoami

It will say root right now.

Let's add a new user. I am going to call my user brad

adduser brad

Just hit enter through all the questions. You will be asked for a user password as well.

You can use the following command to see the user info including the groups it belongs to

id brad

Now, let's add this user to the "sudo" group, which will give them root privileges.

usermod -aG sudo brad

Now if you run the following command, you should see sudo

id brad

Add SSH keys for new account

If you are using SSH, you will want to set up SSH keys for the new account. We do this by adding the public key to a file called authorized_keys in the user's home directory.

Go to the new user's home directory

cd /home/brad

Create a .ssh directory and go into it

mkdir .ssh
cd .ssh

Create a new file called authorized_keys

touch authorized_keys

Now you want to put your public key in that file. You can open it with a simple text editor called nano

sudo nano authorized_keys

Now you can paste your key in here. Just repeat the step above where we ran cat and then the location of your public key. IMPORTANT: Make sure you open a new terminal for this that is not logged into your server.

Now paste the key in the file, hit ctrl+X (or cmd+X), then Y to save, and hit enter again.

Disabling passwords

This is an extra security step. Like I said earlier, we can disable passwords so that only your local machine with the correct SSH keys can log in.

Open the following file on your server

sudo nano /etc/ssh/sshd_config

Look for where it says

PasswordAuthentication yes

Remove the # if there is one and change the yes to no

If you want to disable root login altogether, you could change PermitRootLogin to no as well. Be sure to remove the # sign, because that comments the line out.

Exit (ctrl+X) and hit Y to save the file.

Now you need to restart the sshd service

sudo systemctl restart sshd

Now you can log out by just typing logout

Try logging back in with your user (use your username and server's IP)

ssh brad@69.164.222.31

If you get a message like "Permission denied (publickey)", run the following commands:

eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa_linode     # replace this with whatever you called your key file

Try logging in again and you should see the welcome message and not have to type in any password.

Node.js setup

Now that we have provisioned our server and we have a user set up with SSH keys, it's time to start setting up our app environment. Let's start by installing Node.js.

We can install Node.js with curl using the following commands

curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

# Check to see if node was installed
node --version
npm --version

Get files on the server

We want to get our application files onto the server. We will use Git for this. I am using the goal setter app from my MERN stack series on YouTube

On your SERVER, go to where you want the app to live and clone the repo you want to deploy from GitHub (or wherever else)

Here is the repo I will be using. Feel free to deploy the same app: https://github.com/bradtraversy/mern-tutorial

mkdir sites
cd sites
git clone https://github.com/bradtraversy/mern-tutorial.git

Now I should have a folder called mern-tutorial with all of my files and folders.

App setup

There are a few things that we need to do, including setting up the .env file, installing dependencies, and building our static assets for React.

.env file

With this particular application, I create a .envexample file because I did not want to push the actual .env file to GitHub. So you need to first rename that .envexample:

mv .envexample .env

# To check
ls -a

Now we need to edit that file

sudo nano .env

Change the NODE_ENV to "production" and change the MONGO_URI to your own. You can create a free MongoDB Atlas database for this.

Exit, saving the file.

Dependencies & Build

We need to install the server dependencies. This should be run from the root of the mern-tutorial folder. NOT the backend folder.

npm install

Install frontend deps:

cd frontend
npm install

We need to build our static assets as well. Do this from the frontend folder

npm run build

Run the app

Now we should be able to run the app like we do on our local machine. Go into the root and run

npm start

If you go to your ip and port 5000, you should see your app. In my case, I would go to

http://69.164.222.31:5000

Even though we see our app running, we are not done. We don't want to leave a terminal open with npm start. We also don't want to have to go to port 5000. So let's fix that.

Stop the app from running with ctrl+C

PM2 Setup

PM2 is a production process manager for Node.js. It allows us to keep Node apps running without having to keep a terminal open with npm start, etc., like we do for development.

Let's first install PM2 globally with NPM

sudo npm install -g pm2

Run with PM2

pm2 start backend/server.js   # or whatever your entry file is

Now if you go back to your server IP and port 5000, you will see it running. You could even close your terminal and it will still be running

There are other pm2 commands for various tasks as well that are pretty self-explanatory:

  • pm2 show app
  • pm2 status
  • pm2 restart app
  • pm2 stop app
  • pm2 logs (Show log stream)
  • pm2 flush (Clear logs)
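
If you also want PM2 to bring the app back up after a server reboot, a minimal sketch (pm2 startup prints a command that you then run once):

pm2 startup    # Generate and print a startup-script command; run what it prints
pm2 save       # Save the current process list so it is restored on boot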

Firewall Setup

Obviously we don't want users to have to enter a port of 5000 or anything else. We are going to solve that by using a web server called NGINX. Before we set that up, let's set up a firewall so that people cannot directly access any port except the ports for ssh, http and https.

The firewall we are using is called UFW. Let's enable it.

sudo ufw enable

You will notice now that if you go to the site using :5000, it will not work. That is because we set up a firewall to block all ports.

You can check the status of the firewall with

sudo ufw status

Now let's open the ports that we need, which are 22, 80 and 443

sudo ufw allow ssh     # port 22
sudo ufw allow http    # port 80
sudo ufw allow https   # port 443

Setup NGINX

Now we need to install NGINX to serve our app on port 80, which is the HTTP port

sudo apt install nginx

If you visit your IP address with no port number, you will see a Welcome to nginx! page.

Now we need to configure a proxy for our MERN app.

Open the following config file

sudo nano /etc/nginx/sites-available/default

Find the location / area and replace with this

location / {
        proxy_pass http://localhost:5000;    # or whichever port your app runs on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

Above that, you can also put the domain that you plan on using:

server_name yourdomain.com www.yourdomain.com;
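
Putting the pieces together, the edited server block might look roughly like this (the domain and port are placeholders for your own):

server {
    listen 80 default_server;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}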

Save and close the file

You can check your nginx configuration with the following command

sudo nginx -t

Now restart the NGINX service:

sudo service nginx restart

Now you should see your app when you go to your IP address in the browser.

Domain Name

You probably don't want to use your IP address to access your app in the browser. So let's go over setting up your domain with Linode.

You need to register your domain. It doesn't matter who you use for a registrar. I use Namecheap, but you could use Godaddy, Google Domains or anyone else.

You need to change the nameservers with your Domain registrar. The process can vary depending on who you use. With Namecheap, the option is right on the details page.

You want to add the following nameservers:

  • ns1.linode.com
  • ns2.linode.com
  • ns3.linode.com
  • ns4.linode.com
  • ns5.linode.com

Technically this could take up to 48 hours, but it almost never takes that long. In my experience, it is usually 30 to 90 minutes.

Set your domain in Linode

Go to your dashboard and select Domains and then Create Domain

Add in your domain name and link to the Linode with your app, then submit the form.

Now you will see some info like SOA Record, NS Record, MX Record, etc. There are A records already added that link to your IP address, so you donโ€™t have to worry about that. If you wanted to add a subdomain, you could create an A record here for that.

Like I said, it may take a few hours, but you should be all set. You have now deployed your application.

If you want to make changes to your app, just push to GitHub and run a git pull on your server. There are other tools that help automate deployments, but I will cover those another time.

Set Up SSL

You can purchase an SSL certificate and set it up with your domain registrar, or you can use Let’s Encrypt and set one up for free using the following commands (on current Ubuntu releases the Certbot PPA is deprecated, so you can skip the first line and install python3-certbot-nginx instead):

sudo add-apt-repository ppa:certbot/certbot   # not needed on current Ubuntu releases
sudo apt-get update
sudo apt-get install python-certbot-nginx     # python3-certbot-nginx on current releases
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

# Certificates are only valid for 90 days; test the renewal process with
sudo certbot renew --dry-run

Comprehensive Guide: Docker, NGINX, and Production Node.js Deployment

This document provides a detailed, two-part guide: first, on setting up a basic NGINX web server using Docker for serving static files, and second, on deploying a production-ready Node.js application using NGINX as a reverse proxy with SSL security.


Part 1: Setting Up NGINX to Serve Static Files

This section focuses on containerization, basic package management, and NGINX configuration to serve a simple HTML and CSS website.

1. Docker Environment Setup

Docker is a platform used to develop, ship, and run applications in isolated environments called containers. Weโ€™ll use an Ubuntu container as our lightweight server environment.

Pulling and Running the Container

We use the docker run command to create and start the container, mapping a port on your host machine to the containerโ€™s internal web server port.

| Command | Purpose |
| --- | --- |
| docker pull ubuntu | Fetches the latest Ubuntu OS image from Docker Hub, the default public registry. |
| docker run -it -p 9090:80 ubuntu | Runs a new container from the ubuntu image. |
| -it | Keeps STDIN open (-i) and allocates an interactive terminal (-t), allowing you to interact with the container’s shell. |
| -p 9090:80 | Port mapping: forwards traffic from the host machine’s port 9090 to the container’s internal port 80 (where NGINX will listen). |

2. Installing Packages and Starting NGINX

Once inside the containerโ€™s shell, we install the necessary tools.

Installation Commands

# Update package lists and upgrade installed packages
apt update && apt upgrade

# Install NGINX (the web server) and Neovim (a powerful text editor)
apt install nginx neovim

Verifying and Starting the Web Server

| Command | Purpose |
| --- | --- |
| nginx -v | Verification: confirms NGINX installed correctly and displays the version. |
| nginx | Execution: starts the NGINX web server process. By default, it listens for HTTP traffic on port 80 within the container. |

โš ๏ธ Common Mistake: Missing the -it flag If you omit the -it when running the container, the container will immediately exit because it has no foreground process to run. Solution: Use docker run -it ... or use docker start [container_id] and docker attach [container_id] if itโ€™s already created.


3. NGINX Configuration for Static Files

The primary NGINX configuration file is located at /etc/nginx/nginx.conf. We will modify this file to serve our websiteโ€™s static content.

Configuration Workflow

  1. Navigate: cd /etc/nginx
  2. Backup: mv nginx.conf nginx.backup (Preserves the default configuration)
  3. Create/Edit: nvim nginx.conf
  4. Reload: nginx -s reload (Applies the new configuration without stopping the server)

Creating the Static Content

We must create the website files before referencing them in the NGINX configuration.

# Create a root directory for the website inside /etc/nginx
mkdir MyWebSite

# Create the essential files
touch MyWebSite/index.html
touch MyWebSite/style.css

Sample Website Files

MyWebSite/index.html

<html>
  <head>
    <title>Ahmed X Nginx</title>
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <h1>Hello From NGINX</h1>
    <p>This is a simple NGINX WebPage</p>
  </body>
</html>

MyWebSite/style.css

body {
  background-color: black;
  color: white;
}

Final NGINX Static File Configuration (nginx.conf)

This configuration tells NGINX where to find the files and how to handle file types.

events {
    # The events block controls how NGINX handles connections (e.g., worker_connections).
}

http {
    # The http block contains server configurations.

    # ๐Ÿ”‘ MIME Types: Defines file extensions and their corresponding content types (crucial for browsers)
    types {
        text/css css;
        text/html html;
    }

    server {
        # The server block defines a virtual host.
        listen 80;            # Listen for HTTP requests on the container's port 80.
        server_name _;        # Wildcard: Matches requests for any domain name.

        # ๐ŸŽฏ Root Directive: Defines the base directory for file lookups.
        root /etc/nginx/MyWebSite;

        # When a request comes in (e.g., http://host:9090/), NGINX will look for
        # index.html inside the directory defined by the 'root' directive.
    }
}

Testing: After reloading NGINX (nginx -s reload), you should be able to access the website by pointing your host machineโ€™s browser to http://localhost:9090.
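
If you’d rather verify from the terminal, a quick header check works too (assuming the 9090 port mapping from earlier):

# Content-Type should be text/html for the page...
curl -I http://localhost:9090

# ...and text/css for the stylesheet, thanks to the types block
curl -I http://localhost:9090/style.css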


Part 2: Production Deployment of a Node.js Application

This section details using NGINX as a reverse proxy to deploy a Node.js application, including process management, firewall setup, and SSL encryption.

4. Application and Infrastructure Setup

In a production environment, we deploy the Node.js application on a high, non-standard port (e.g., 5173) and use NGINX to handle the public-facing traffic on the standard port (80/443).

Installing Required Tools

The installation command includes all necessary components for a robust deployment.

apt install git nvim nginx tmux nodejs ufw python3-certbot-nginx

| Tool | Purpose |
| --- | --- |
| nodejs | The runtime environment for the application. |
| git | For cloning the project source code. |
| tmux | A terminal multiplexer for managing multiple sessions (useful for running background tasks). |
| ufw | The Uncomplicated Firewall, used to secure the server. |
| python3-certbot-nginx | The tool for obtaining and configuring SSL/TLS certificates from Let’s Encrypt. |

Cloning and Installing the Project

# Clone the repository containing the Node.js project
git clone https://github.com/akibahmed229/Java-Employee_Management-System-Website.git

# Navigate into the project folder
cd Java-Employee_Management-System-Website

# Install dependencies defined in package.json
npm install

Process Management and Firewall

We use PM2 (Process Manager 2) to ensure the Node.js application runs continuously and automatically restarts if it crashes.

  1. Install PM2 globally:

    sudo npm i pm2 -g
    
  2. Start the application:

    pm2 start index.js --name "myapp"
    # Note: Using 'index.js' is more explicit than 'index'
    
  3. Enable Firewall:

    # Enables the firewall (WARNING: this blocks all unapproved traffic, so allow SSH first if you are connected remotely)
    sudo ufw enable
    
    # Explicitly allow inbound HTTP traffic on standard web port 80
    sudo ufw allow 'Nginx HTTP' # Or: sudo ufw allow 80
    

5. Configuring NGINX as a Reverse Proxy

A reverse proxy sits in front of the application server, accepting client requests and forwarding them to the application. This setup centralizes security (SSL), load balancing, and static file serving, leaving the Node.js app to focus purely on business logic.

Reverse Proxy Configuration (/etc/nginx/nginx.conf)

events { }

http {
    server {
        # Listen for connections on standard HTTP port 80 (IPv4 and IPv6)
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name yourdomain.com www.yourdomain.com; # IMPORTANT: Replace with your actual domain

        location / {
            # ๐ŸŽฏ The core reverse proxy directive: Forward requests to the Node.js app running locally on port 5173
            proxy_pass http://localhost:5173;

            # ๐Ÿค Essential Proxy Headers for correct communication
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade; # Required for WebSockets
            proxy_set_header Connection 'upgrade';  # Required for WebSockets
            proxy_set_header Host $host;            # Passes the original domain name to the backend app
            proxy_cache_bypass $http_upgrade;       # Ensures WebSocket requests bypass any proxy cache
        }

        # NOTE: The original 'root /var/www/html;' and 'index ...' directives are typically
        # removed or placed in a separate location block when using a reverse proxy for the root location.
    }
}

QOL Enhancement: Using Multiple Config Files In production, itโ€™s better practice to create a dedicated configuration file for your site in /etc/nginx/sites-available/yourdomain.conf and create a symbolic link to /etc/nginx/sites-enabled/. This avoids cluttering the main nginx.conf and makes managing multiple sites easier.
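
A minimal sketch of that layout, assuming the Debian/Ubuntu convention where nginx.conf includes /etc/nginx/sites-enabled/* (yourdomain.conf is a placeholder name):

# Put your server { ... } block in its own file
sudo nano /etc/nginx/sites-available/yourdomain.conf

# Enable it with a symlink, then test and reload
sudo ln -s /etc/nginx/sites-available/yourdomain.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo service nginx reload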


6. Securing with SSL/TLS (HTTPS)

SSL/TLS (Secure Sockets Layer/Transport Layer Security) encrypts communication between the userโ€™s browser and the server, creating HTTPS. We use Certbot with the Letโ€™s Encrypt service to automate this process.

Installing the Certificate

The certbot command automatically edits the NGINX configuration to redirect HTTP (port 80) traffic to HTTPS (port 443) and adds the necessary certificate files.

# This command automatically obtains a certificate for your domain and configures NGINX
certbot --nginx -d yourdomain.com -d www.yourdomain.com

Testing Automated Renewal

Letโ€™s Encrypt certificates are only valid for 90 days, so automated renewal is essential.

# Performs a dry run to test the renewal process without actually renewing
certbot renew --dry-run

Best Practice: Port Management After enabling SSL, confirm your firewall (ufw) is allowing HTTPS traffic on port 443: sudo ufw allow 'Nginx Full'.


Advanced Techniques: Optimizing a Dockerized NGINX/Node.js Stack

This section explores advanced concepts for performance, security, and maintainability in your deployed environment.

7. Performance and Hardening with NGINX

A. Caching Static Assets

NGINX can significantly improve page load times by caching static files like images, CSS, and JavaScript.

Advanced Configuration Snippet:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    # Match common static file extensions
    expires 30d; # Tell the client's browser to cache these files for 30 days
    add_header Pragma "public";
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";

    # Ensure NGINX serves these files directly (important when using a root directive)
    root /path/to/static/assets;

    # โš ๏ธ Use a separate location block for caching, not the main proxy_pass block
}
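
To confirm the cache headers are being applied, you can inspect a response (the asset path here is just a placeholder for one of your real static files):

# Expect to see Expires and Cache-Control: public in the output
curl -I http://localhost/assets/app.css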

B. Rate Limiting for Security

Rate limiting prevents abuse and Denial of Service (DoS) attacks by restricting the number of requests a single client can make over a period of time.

# 1. Define the limit zone in the http block
# 'mylimit' is the zone name, 1m is the size (1MB), and 5r/s is 5 requests per second
limit_req_zone $binary_remote_addr zone=mylimit:1m rate=5r/s;

server {
    # 2. Apply the limit in the server or location block
    location /login/ {
        # Burst allows a short burst of requests above the limit before throttling.
        limit_req zone=mylimit burst=10 nodelay;
        proxy_pass http://localhost:5173;
    }
}
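
You can watch the limiter kick in from the command line; with a 5r/s rate and burst=10, a rapid burst of 20 requests should return 503 for the overflow (NGINX’s default rejection status for limit_req):

for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost/login/
done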

8. Docker Best Practices and Automation

A. Using Multi-Stage Builds

When creating a production Docker image for a Node.js application, using a multi-stage build dramatically reduces the final image size by discarding build-time dependencies.

Conceptual Dockerfile Snippet:

# Stage 1: Build stage (does the heavy lifting; its layers are discarded)
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build # Assuming a build script exists

# Stage 2: Production stage (keeps only what's needed at runtime)
FROM node:20-slim
WORKDIR /app
# pm2-runtime must exist inside the final image
RUN npm install -g pm2
# Only copy the essential files from the builder stage
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/index.js ./ # Or whatever your entry file is
CMD [ "pm2-runtime", "start", "index.js" ]
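
Building and running it follows the usual Docker workflow (my-node-app is a placeholder tag, and the port matches the one used throughout this guide):

docker build -t my-node-app .
docker run -d --name my-node-app -p 5173:5173 my-node-app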

B. PM2 Ecosystem File

Instead of managing startup via the command line, use a PM2 Ecosystem file (ecosystem.config.js) to standardize configuration, logging, and environment variables.

Example:

module.exports = {
  apps: [
    {
      name: "node-app-prod",
      script: "./index.js",
      instances: "max", // Run on all available CPU cores
      exec_mode: "cluster",
      env: {
        NODE_ENV: "production",
        PORT: 5173,
      },
    },
  ],
};

Start command: pm2 start ecosystem.config.js

9. Troubleshooting and Diagnostics

A. Checking NGINX Configuration Errors

Before reloading NGINX, always check the configuration syntax to prevent downtime.

nginx -t
# Output should be: "syntax is ok" and "test is successful"

B. Diagnosing PM2/Node.js Issues

If your application isnโ€™t responding through NGINX, check the logs and status of your PM2 process.

| Command | Purpose |
| --- | --- |
| pm2 status | Shows the current running status, uptime, and process ID. |
| pm2 logs myapp | Streams the application’s standard output and error logs in real time. |
| pm2 monit | Opens a real-time terminal dashboard to monitor CPU, memory, and logs. |

Distribution

NixOS

NixOS Command Cheatsheet

A collection of useful Nix and NixOS commands for system management.


System & Store Maintenance

  • Verify & Repair Store: Checks the integrity of the Nix store and repairs any issues. Use this if you suspect corruption.

    sudo nix-store --verify --check-contents --repair
    
  • Garbage Collection: Removes all unused packages from the Nix store to free up space.

    sudo nix-collect-garbage -d
    sudo nix-collect-garbage --delete-older-than 7d
    sudo nix store gc
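    
    # If you want a dry look first, this lists dead store paths without deleting anything
    nix-store --gc --print-dead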
    

Generation Management

  • List System Generations: Shows all past system configurations (generations).

    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system
    
  • Switch Generation (No Reboot): Allows you to roll back to a previous system configuration without restarting.

    1. List generations:

      nix-env --list-generations -p /nix/var/nix/profiles/system
      
    2. Switch to generation:

      sudo nix-env --switch-generation <number> -p /nix/var/nix/profiles/system
      
    3. Activate configuration:

      sudo /nix/var/nix/profiles/system/bin/switch-to-configuration switch
      
    4. Set Booted Generation as Default: If you boot into an older generation, run this to make it the default.

      sudo /run/current-system/bin/switch-to-configuration boot
      

System Rebuilding

  • Rebuild without Cache: Forces a rebuild without using cached tarballs.
    sudo nixos-rebuild switch --flake .#host --option tarball-ttl 0
    
  • Rebuild on a Remote Machine: Uses sudo on a remote machine during activation.
    nixos-rebuild --use-remote-sudo switch --flake .#host
    

Flake Management

  • Update Flake Inputs: Updates flake dependencies and commits to flake.lock.

    nix flake update --commit-lock-file --accept-flake-config
    
  • Update Flake Inputs (authenticated): Same as above, but provides a GitHub auth token to avoid API rate limits.

    nix flake update --option access-tokens "github.com=$(gh auth token)"
    
  • Inspect Flake Metadata: Shows flake metadata in JSON format.

    nix flake metadata --json | nix run nixpkgs#jq
    

Development & Packaging

  • Prefetch URL: Downloads a file and prints its hash. Essential for packaging.

    nix-prefetch-url "https://discord.com/api/download?platform=linux&format=tar.gz"
    
  • Evaluate a Nix File: Evaluates a Nix expression from a file.

    nix eval --file default.nix
    

Nixpkgs Legacy: Using Old OpenSSH with DSS

Sometimes you need to connect to legacy SSH servers that only support ssh-dss (DSA) keys. Modern Nixpkgs disables DSS by default, but you can pin an older package.

1. Create a Nix file for legacy OpenSSH

legacy-ssh.nix:

{ pkgs ? import <nixpkgs> {} }:

let
  # Pin an older nixpkgs commit with DSS support
  legacyPkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/2f6ef9aa6a7eecea9ff7e185ca40855f36597327.tar.gz";
    sha256 = "0jcs9r4q57xgnbrc76davqy10b1xph15qlkvyw1y0vk5xw5vmxfz";
  }) {};
in
  legacyPkgs.openssh

Browse older package versions: Nix Versions

2. Build the package

nix build -f legacy-ssh.nix

3. Use the legacy ssh binary

./result/bin/ssh -F /dev/null \
  -o HostKeyAlgorithms=ssh-dss \
  -o KexAlgorithms=diffie-hellman-group1-sha1 \
  -o PreferredAuthentications=password,keyboard-interactive \
  admin@192.168.0.1 -vvv

Explanation of key options:

  • -F /dev/null โ†’ Ignore default SSH config.
  • HostKeyAlgorithms=ssh-dss โ†’ Allow DSS host keys.
  • KexAlgorithms=diffie-hellman-group1-sha1 โ†’ Use legacy key exchange.
  • PreferredAuthentications=password,keyboard-interactive โ†’ Only use password or interactive login.

NixOS with LUKS, LVM, and Btrfs: A Comprehensive Guide

๐Ÿงญ Table of Contents

  1. ๐Ÿ’ฟ NixOS Installation: Manual Partitioning with LUKS + LVM + Btrfs
  2. โž• Extending an Encrypted LVM Volume with a New Disk
  3. โž– Removing an Encrypted Disk from an LVM Volume
  4. ๐Ÿ“š LUKS Command Reference
  5. ๐Ÿ“š LVM Command Reference
  6. ๐Ÿ“š Btrfs Command Reference
  7. ๐Ÿ”ง System Recovery: Chrooting with a Live USB

1. ๐Ÿ’ฟ NixOS Installation: Manual Partitioning with LUKS + LVM + Btrfs

This section guides you through a fresh installation of NixOS on a single disk.

Prerequisites

  1. Boot the NixOS installer.
  2. Connect to the internet.
  3. Switch to a root shell: sudo -i.
  4. Identify your target disk: lsblk.
  5. Set a variable for your device:
    export DEVICE=/dev/sda
    

Step 1. Wipe Disk and Create Partition Table

๐Ÿšจ Warning: This will destroy all data on the specified disk.

vgchange -a n root_vg # deactivate the volume group
sgdisk --zap-all ${DEVICE}

# Optional: overwrite the disk with random data first. This can take hours.
dd if=/dev/urandom of=${DEVICE} bs=4096 status=progress

Step 2. Create Partitions

We will create a standard 4-partition layout for a modern UEFI system.

# Partition 1: 1M BIOS Boot partition (for GRUB compatibility)
sgdisk --new=1:0:+1M --typecode=1:EF02 --change-name=1:boot ${DEVICE}

# Partition 2: 500M EFI System Partition (ESP)
sgdisk --new=2:0:+500M --typecode=2:EF00 --change-name=2:ESP ${DEVICE}

# Partition 3: 4G Swap partition
sgdisk --new=3:0:+4G --typecode=3:8200 --change-name=3:swap ${DEVICE}

# Partition 4: The rest of the disk for our encrypted data
# We use 8E00 which is the typecode for "Linux LVM"
sgdisk --new=4:0:0 --typecode=4:8E00 --change-name=4:root ${DEVICE}

Step 3. Format Unencrypted Filesystems

Format the ESP and swap partitions, giving them labels for easy mounting. (Note: the p2/p3 partition suffix applies to NVMe-style device names like /dev/nvme0n1; for a device like /dev/sda the partitions are ${DEVICE}2 and ${DEVICE}3.)

# Format the EFI partition
mkfs.vfat -n ESP ${DEVICE}p2

# Set up the swap partition
mkswap -L swap ${DEVICE}p3

Step 4. Set Up LUKS Encryption and LVM ๐Ÿ”’

This is the core of the setup. We create an encrypted container on our main partition and then build an LVM structure inside it.

# 1. Create the LUKS encrypted container on the fourth partition.
# You will be prompted to enter and confirm a strong passphrase. Remember this!
echo "Formatting the LUKS container. Please enter your encryption passphrase."
cryptsetup luksFormat -v -s 512 -h sha512 --label crypted ${DEVICE}p4

# 2. Open the LUKS container to make it accessible.
# This creates a decrypted "virtual" device at /dev/mapper/crypted.
echo "Opening the LUKS container. Please enter your passphrase."
cryptsetup open ${DEVICE}p4 crypted

# 3. Set up LVM *inside* the decrypted container.
# Initialize the physical volume (PV) on the decrypted device
pvcreate /dev/mapper/crypted

# Create the volume group (VG) named "root_vg"
vgcreate root_vg /dev/mapper/crypted

# Create the logical volume (LV) named "root" that uses all available space
lvcreate -l 100%FREE -n root root_vg
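
If you want to sanity-check the stack before formatting, the standard LVM listing commands should now show one PV, one VG, and one LV:

# (Verification) PV on /dev/mapper/crypted, VG root_vg, LV root
pvs && vgs && lvs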

Step 5. Format the LVM Volume with Btrfs

Now, we format the LVM logical volume (not the physical partition) with Btrfs.

mkfs.btrfs -L root /dev/root_vg/root

Step 6. Create and Mount Btrfs Subvolumes

We use Btrfs subvolumes to separate parts of our system, which is standard practice for NixOS.

# 1. Mount the top-level Btrfs volume
mount /dev/root_vg/root /mnt

# 2. Create the subvolumes
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/persist
btrfs subvolume create /mnt/nix

# 3. Unmount the top-level volume
umount /mnt

# 4. Mount the root subvolume with correct options
mount -o subvol=root,compress=zstd,noatime /dev/root_vg/root /mnt

# 5. Create the directories for the other mountpoints
mkdir -p /mnt/persist
mkdir -p /mnt/nix
mkdir -p /mnt/boot

# 6. Mount the other subvolumes
mount -o subvol=persist,noatime,compress=zstd /dev/root_vg/root /mnt/persist
mount -o subvol=nix,noatime,compress=zstd /dev/root_vg/root /mnt/nix

Step 7. Mount Boot Partition and Activate Swap

Finish by mounting the ESP and activating the swap.

# Mount the boot partition
mount ${DEVICE}p2 /mnt/boot

# Activate the swap partition
swapon ${DEVICE}p3

Step 8. Generate NixOS Configuration

Finally, generate the NixOS configuration. The installer will automatically detect the LUKS and LVM setup.

nixos-generate-config --root /mnt

Your /mnt/etc/nixos/hardware-configuration.nix will be auto-generated with the correct LUKS and filesystem entries, similar to this:

# Example /etc/nixos/hardware-configuration.nix

{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [ (modulesPath + "/profiles/qemu-guest.nix")
    ];

  boot.initrd.availableKernelModules = [ "ahci" "xhci_pci" "virtio_pci" "sr_mod" "virtio_blk" ];
  boot.initrd.kernelModules = [ "dm-snapshot" ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

  # This part is automatically added to unlock your disk at boot
  boot.initrd.luks = {
    devices."crypted" = {
      device = "/dev/disk/by-label/crypted";
      preLVM = true;
    };
  };

  # These are your Btrfs subvolumes
  fileSystems."/" =
    { device = "/dev/mapper/root_vg-root";
      fsType = "btrfs";
      options = [ "subvol=root" ];
    };

  fileSystems."/persist" =
    { device = "/dev/mapper/root_vg-root";
      fsType = "btrfs";
      options = [ "subvol=persist" ];
    };

  fileSystems."/nix" =
    { device = "/dev/mapper/root_vg-root";
      fsType = "btrfs";
      options = [ "subvol=nix" ];
    };

  # Your boot and swap partitions
  fileSystems."/boot" = {
      device = "/dev/disk/by-label/ESP";
      fsType = "vfat";
      options = [ "fmask=0022" "dmask=0022" ];
    };

  swapDevices = [ { device = "/dev/disk/by-label/swap"; } ];

  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

You can now proceed with editing your configuration.nix and running nixos-install.


2. โž• Extending an Encrypted LVM Volume with a New Disk

Use this guide when youโ€™ve added a new physical disk (e.g., /dev/vdb) and want to add its encrypted space to your existing root_vg.

๐Ÿšจ Pre-flight Check: Backup

Before you begin, ensure you have a backup of any critical data.

Step 1. Partition and Label the New Disk

Weโ€™ll create a single partition on the new disk (/dev/vdb) and give it a partition label for easy identification.

# Open parted for /dev/vdb
sudo parted /dev/vdb

# Inside parted, run the following commands:
# (parted)
mklabel gpt
mkpart primary 0% 100%
name 1 crypted_ext
quit

This creates /dev/vdb1 and sets its partition name (label) to crypted_ext.
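
A quick way to confirm the label took effect:

# (Verification) The PARTLABEL column should show crypted_ext for vdb1
lsblk -o NAME,SIZE,PARTLABEL /dev/vdb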

Step 2. Create and Open the LUKS Encrypted Container

Now, encrypt the new partition.

# Encrypt /dev/vdb1.
# -> IMPORTANT: Use the EXACT SAME password as your main encryption.
# -> This allows NixOS to unlock both with a single password prompt.
sudo cryptsetup luksFormat /dev/vdb1

# Open the new LUKS container so we can work with it.
sudo cryptsetup luksOpen /dev/vdb1 crypted_ext_mapper

The unlocked device is now available at /dev/mapper/crypted_ext_mapper.

Step 3. Integrate the New Encrypted Disk into LVM

Add the newly decrypted device as a Physical Volume (PV) to your existing Volume Group (VG).

# 1. Create a new Physical Volume (PV) on the unlocked container.
sudo pvcreate /dev/mapper/crypted_ext_mapper

# 2. Extend your existing 'root_vg' Volume Group with this new PV.
sudo vgextend root_vg /dev/mapper/crypted_ext_mapper

# 3. (Verification) Check your Volume Group. It should now be larger.
sudo vgs

Step 4. Extend the Logical Volume and Btrfs Filesystem

Make the new space available to your filesystem.

# 1. Extend the Logical Volume to use 100% of the new free space.
sudo lvextend -l +100%FREE /dev/mapper/root_vg-root

# 2. Resize the Btrfs filesystem to fill the newly expanded Logical Volume.
sudo btrfs filesystem resize max /

# 3. (Verification) Check your disk space.
df -h /

Step 5. Update configuration.nix

This is the most critical step. You must tell NixOS to unlock this second device at boot.

Edit your /etc/nixos/configuration.nix file and add the new device to boot.initrd.luks.

# Your configuration.nix

boot.initrd.luks = {
  devices."crypted" = {
    device = "/dev/disk/by-label/crypted"; # This is your original /dev/vda4
    preLVM = true;
  };

  # --- ADD THIS NEW BLOCK ---
  devices."crypted_ext" = {
    # Use the partition label you set in Step 1
    device = "/dev/disk/by-partlabel/crypted_ext";
    preLVM = true;
    allowDiscards = true; # Good practice for SSDs/VMs
  };
};

Note on Passwords: Because you used the same password for both LUKS devices, NixOS will ask for your password only once at boot and use it to unlock both containers.

Step 6. Rebuild and Reboot

DO NOT REBOOT until you have applied the new configuration.

# Apply your new NixOS configuration
sudo nixos-rebuild switch

# Now it is safe to reboot
sudo reboot

3. โž– Removing an Encrypted Disk from an LVM Volume

This guide covers the complex process of removing a disk (e.g., /dev/vdb) from a Volume Group when your filesystem spans multiple disks.

๐Ÿšจ WARNING: This is a high-risk operation. A mistake can lead to total data loss. Back up all critical data before proceeding. This process almost always requires booting from a Live Linux ISO because you cannot shrink a mounted root filesystem.

The Goal

Our goal is to move all data off /dev/vdb (which is part of root_vg) onto your other disk (/dev/mapper/crypted) and then remove /dev/vdb from the LVM setup.

The Problem

You cannot pvmove data off /dev/vdb because there is no free space on the other disk to move it to. You must first shrink your filesystem and logical volume to be smaller than the size of the disk you want to keep.

Example:

  • Disk 1 (/dev/mapper/crypted): 35G
  • Disk 2 (/dev/vdb): 20G
  • Total root_vg size: 55G
  • Your Goal: You must shrink your Btrfs filesystem and LV to < 35G (e.g., 34G).

Step 1. Boot from a Live Linux ISO

  1. Attach a NixOS, Ubuntu, or other Linux ISO to your VM or machine and boot from it.
  2. Open a terminal.

Step 2. Unlock Encrypted Disks and Activate LVM

# 1. Unlock your *main* encrypted partition (the one you are keeping)
# Replace /dev/vda4 with your actual partition
sudo cryptsetup luksOpen /dev/vda4 crypted

# 2. Unlock the *second* disk's encrypted partition
# (This assumes /dev/vdb is encrypted, following the guide in section 2)
sudo cryptsetup luksOpen /dev/vdb1 crypted_ext

# 3. Activate the LVM Volume Group
sudo vgchange -ay

Step 3. Resize Btrfs and LV (Offline)

This is the most critical part.

# 1. Run a filesystem check (highly recommended)
sudo btrfs check /dev/mapper/root_vg-root

# 2. Shrink the Btrfs filesystem.
# We set it to 34G, which is smaller than our 35G target disk.
sudo btrfs filesystem resize 34G /dev/mapper/root_vg-root

# 3. Shrink the Logical Volume to match.
sudo lvreduce -L 34G /dev/mapper/root_vg-root

Step 4. Reboot into Your Normal System

The offline part is done.

sudo reboot

Remove the Live ISO and boot back into your NixOS. Your system will boot up on a smaller filesystem.

Step 5. Migrate Data and Remove the Disk (Online)

Now that you are back in your system, sudo vgs should show free space in root_vg.

# 1. Load the 'dm-mirror' module, which pvmove needs
sudo modprobe dm_mirror

# 2. Move all data extents off the disk you want to remove.
# This will move data from crypted_ext to the free space on crypted.
# (After a normal boot, the initrd opens this device under the name from
# configuration.nix, so it appears as /dev/mapper/crypted_ext.)
sudo pvmove -v /dev/mapper/crypted_ext

# 3. Remove the now-empty Physical Volume from the Volume Group.
sudo vgreduce root_vg /dev/mapper/crypted_ext

# 4. Remove the LVM metadata from the device.
sudo pvremove /dev/mapper/crypted_ext

Step 6. Update configuration.nix and Clean Up

  1. Edit your /etc/nixos/configuration.nix and remove the entry for crypted_ext from boot.initrd.luks.
  2. Rebuild your system:
    sudo nixos-rebuild switch
    
  3. You can now safely close the LUKS container and reboot. The disk /dev/vdb is completely free.
    sudo cryptsetup luksClose crypted_ext
    sudo reboot
    

4. ๐Ÿ“š LUKS Command Reference

Common cryptsetup commands for managing LUKS devices.

  • Format a new LUKS container:

    # --label is recommended for use in /dev/disk/by-label/
    cryptsetup luksFormat --label crypted /dev/sda4
    
  • Open (decrypt) a container:

    # This creates a device at /dev/mapper/my_decrypted_volume
    cryptsetup luksOpen /dev/sda4 my_decrypted_volume
    
  • Close (lock) a container:

    cryptsetup luksClose my_decrypted_volume
    
  • Add a new password (key slot):

    # You will be prompted for an *existing* password first.
    cryptsetup luksAddKey /dev/sda4
    
  • Remove a password:

    # You will be prompted for the password you wish to remove.
    cryptsetup luksRemoveKey /dev/sda4
    
  • View header information (and key slots):

    cryptsetup luksDump /dev/sda4
    
  • Resize an online LUKS container: (Useful if you resize the underlying partition).

    cryptsetup resize my_decrypted_volume
    

5. ๐Ÿ“š LVM Command Reference

Common commands for managing LVM.

Physical Volume (PV) - The Disks

  • Initialize a disk for LVM:

    pvcreate /dev/mapper/crypted
    
  • List physical volumes:

    pvs
    pvdisplay
    
  • Move data from one PV to another (within the same VG):

    # Moves all data *off* /dev/sdb1
    pvmove /dev/sdb1
    
    # Moves data from /dev/sdb1 *to* /dev/sdc1
    pvmove /dev/sdb1 /dev/sdc1
    
  • Remove LVM metadata from a disk:

    # Only run this *after* removing the PV from its VG.
    pvremove /dev/sdb1
    

Volume Group (VG) - The Pool of Disks

  • Create a new VG:
    # Creates a VG named "my_vg" using two disks
    vgcreate my_vg /dev/sdb1 /dev/sdc1
    
  • List volume groups:
    vgs
    vgdisplay
    
  • Add a disk (PV) to an existing VG:
    vgextend my_vg /dev/sdd1
    
  • Remove a disk (PV) from a VG:
    # The PV must be empty (use pvmove first).
    vgreduce my_vg /dev/sdb1
    
  • Remove a VG:
    # Make sure all LVs are removed first.
    vgremove my_vg
    

Logical Volume (LV) - The โ€œPartitionsโ€

  • Create a new LV:

    # Create a 50G LV named "my_lv" from the "my_vg" pool
    lvcreate -L 50G -n my_lv my_vg
    
    # Create an LV using all remaining free space
    lvcreate -l 100%FREE -n my_other_lv my_vg
    
  • List logical volumes:

    lvs
    lvdisplay
    
  • Extend an LV (and its filesystem):

    # Extend the LV to be 100G in total
    lvresize -L 100G /dev/my_vg/my_lv
    
    # Add 20G to the LV's current size
    lvresize -L +20G /dev/my_vg/my_lv
    
    # Extend the LV to use all free space in the VG
    lvextend -l +100%FREE /dev/my_vg/my_lv
    
    # --- IMPORTANT ---
    # After extending, you must resize the filesystem inside it.
    # For ext4:
    resize2fs /dev/my_vg/my_lv
    # For btrfs:
    btrfs filesystem resize max /path/to/mountpoint
    
  • Reduce an LV (and its filesystem): ๐Ÿšจ DANGEROUS! You must shrink the filesystem first.

    # 1. Shrink the filesystem (e.g., ext4, UNMOUNTED)
    resize2fs /dev/my_vg/my_lv 40G
    
    # 2. Shrink the LV to match
    lvreduce -L 40G /dev/my_vg/my_lv
    
    # For Btrfs, you can often do it online:
    # 1. Shrink Btrfs
    btrfs filesystem resize 40G /path/to/mountpoint
    # 2. Shrink LV
    lvreduce -L 40G /dev/my_vg/my_lv
    
  • Remove an LV:

    # Make sure it's unmounted first.
    lvremove /dev/my_vg/my_lv
    

6. ๐Ÿ“š Btrfs Command Reference

Common commands for managing Btrfs filesystems and subvolumes.

  • Format a device:

    # -L sets the label
    mkfs.btrfs -L root /dev/my_vg/my_lv
    
  • Resize a filesystem:

    # Grow to fill the maximum available space (after an lvextend)
    btrfs filesystem resize max /path/to/mountpoint
    
    # Set to a specific size (e.g., 50G)
    btrfs filesystem resize 50G /path/to/mountpoint
    
    # Shrink by 10G
    btrfs filesystem resize -10G /path/to/mountpoint
    
  • Show filesystem usage:

    # Btrfs-aware 'df'
    btrfs filesystem df /path/to/mountpoint
    
  • Create a subvolume:

    # Mount the top-level (ID 5) volume first
    mount /dev/my_vg/my_lv /mnt
    
    # Create subvolumes
    btrfs subvolume create /mnt/root
    btrfs subvolume create /mnt/nix
    
    umount /mnt
    
  • List subvolumes:

    btrfs subvolume list /path/to/mountpoint
    
  • Delete a subvolume:

    # Deleting a subvolume is recursive and instant
    btrfs subvolume delete /mnt/nix
    
  • Create a snapshot:

    # Create a read-only snapshot of 'root'
    btrfs subvolume snapshot -r /mnt/root /mnt/root-snapshot
    
    # Create a writable snapshot (a clone)
    btrfs subvolume snapshot /mnt/root /mnt/root-clone
    
  • Check a Btrfs filesystem (unmounted):

    btrfs check /dev/my_vg/my_lv
    

7. ๐Ÿ”ง System Recovery: Chrooting with a Live USB

If your system fails to boot due to a broken configuration, a kernel panic, or a faulty GRUB, you can use a Live USB (like the NixOS installer) to chroot into your installation and fix it. The nixos-enter command is a powerful script that makes this much easier.

Prerequisites

  1. Boot from a NixOS installer ISO.
  2. Connect to the internet (if you need to download packages).
  3. Open a terminal and get a root shell: sudo -i.

Step 1. Identify and Unlock LUKS Volumes

First, find your encrypted partitions.

lsblk

You will need to identify all partitions that are part of your LVM root_vg. In the setup from this guide, there are two: the main crypted partition (e.g., /dev/vda4) and the extended one crypted_ext (e.g., /dev/vdb1).

๐Ÿšจ Important: You must unlock ALL LUKS volumes that are part of your Volume Group, otherwise LVM will fail to activate.

# Unlock the primary disk (e.g., /dev/vda4)
cryptsetup luksOpen /dev/vda4 crypted

# Unlock the extended disk (e.g., /dev/vdb1)
cryptsetup luksOpen /dev/vdb1 crypted_ext

Enter your single passphrase when prompted for each.

Step 2. Activate the LVM Volume Group

Tell LVM to scan for and activate the Volume Groups now available on the decrypted devices.

# Scan for and activate all volume groups
vgchange -ay

You should see a message that root_vg is now active.

Step 3. Mount Filesystems for nixos-enter

nixos-enter is smart, but it needs the root (/) and boot (/boot) partitions mounted at /mnt.

# 1. Mount the Btrfs root subvolume
# This is the subvolume you set for '/' in your configuration.nix
mount -o subvol=root /dev/mapper/root_vg-root /mnt

# 2. Mount the boot (ESP) partition
# This is VITAL for fixing GRUB. Find your ESP (e.g., /dev/vda2)
mkdir -p /mnt/boot
mount /dev/vda2 /mnt/boot

Step 4. Chroot into Your System

With the root and boot partitions mounted, you can now use nixos-enter. It will automatically find your /nix store and other subvolumes.

nixos-enter

Your prompt should change, and you are now โ€œinsideโ€ your broken NixOS installation as the root user.

Step 5. Perform Repairs (Inside the Chroot)

Here are common fixes for a broken system.

Scenario 1: Fix a Broken configuration.nix

This is the most common fix. You made a change, rebuilt, and now it wonโ€™t boot.

# 1. Edit your configuration to fix the typo or bad option
nano /etc/nixos/configuration.nix

# 2. Rebuild the system.
# 'nixos-rebuild switch' will build and make it the default.
nixos-rebuild switch

# If you are less confident, 'nixos-rebuild boot' will build it
# and set it as the default, but won't activate it immediately.
nixos-rebuild boot

Scenario 2: Roll Back to a Previous Generation

If you just want to undo your last build, you can roll back.

# This will build your *previous* configuration and make it the default.
nixos-rebuild boot --rollback

# You can also list all generations and switch to a specific one:
nix-env -p /nix/var/nix/profiles/system --list-generations
nix-env -p /nix/var/nix/profiles/system --switch-generation 123

Scenario 3: Manually Reinstall GRUB

If nixos-rebuild doesnโ€™t fix a โ€œno bootable deviceโ€ error, GRUB itself might be broken.

# This command reinstalls GRUB to your EFI directory.
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=nixos

After this, itโ€™s still a good idea to run nixos-rebuild switch to ensure GRUBโ€™s configuration file is also correct.

Step 6. Exit and Reboot

Once you are finished, exit the chroot and unmount everything.

# 1. Exit the chroot
exit

# 2. Unmount all partitions
umount -R /mnt

# 3. Reboot the system
reboot

Remove your Live USB, and your system should now boot into the fixed configuration.

Installation

Arch Linux Installation Guide

This guide provides step-by-step instructions for installing Arch Linux.

Table of Contents

  1. Keyboard Layout Setup
  2. Connecting to Wi-Fi
  3. SSH Connection to Another Device
  4. Date and Time Setup
  5. Disk Management for Installation
  6. System Installation
  7. Configuring the New Installation (arch-chroot)
  8. Edit The Mkinitcpio File For Encrypt
  9. Grub Installation
  10. Enabling Systemd Services
  11. Creating a New User
  12. Finishing the Installation
  13. Post-Installation Configuration

1. Keyboard Layout Setup

Load the keyboard layout using the following commands:

localectl
localectl list-keymaps
localectl list-keymaps | grep us
loadkeys us

Explanation:

  • localectl: Lists the current keyboard layout settings.
  • localectl list-keymaps: Lists all available keyboard layouts.
  • localectl list-keymaps | grep us: Filters the list to show only layouts containing โ€œusโ€ (United States layout).
  • loadkeys us: Sets the keyboard layout to US.

2. Connecting to Wi-Fi

Connect to a Wi-Fi network using the following commands:

iwctl
device list
station wlan0 get-networks
station wlan0 connect wifiname
ip a
ping -c 5 google.com

Explanation:

  • iwctl: Launches the interactive Wi-Fi control utility.
  • device list: Lists available network devices.
  • station wlan0 get-networks: Scans for available Wi-Fi networks.
  • station wlan0 connect wifiname: Connects to the specified Wi-Fi network (replace โ€œwifinameโ€ with the actual network name).
  • ip a: Displays the network interfaces and their IP addresses.
  • ping -c 5 google.com: Pings the Google website to test the internet connection.

3. SSH Connection to Another Device

Set a password and establish an SSH connection to another device:

passwd
ssh root@ipaddress

Explanation:

  • passwd: Sets the password for the current device (root user).
  • ssh root@ipaddress: Connects to the current device using SSH from another device (replace โ€œipaddressโ€ with the actual IP address of the current device).

4. Date and Time Setup

Set the date and time for the system:

timedatectl
timedatectl list-timezones
timedatectl list-timezones | grep Dhaka
timedatectl set-timezone Asia/Dhaka
timedatectl

Explanation:

  • timedatectl: Displays the current system time and date settings.
  • timedatectl list-timezones: Lists all available time zones.
  • timedatectl list-timezones | grep Dhaka: Filters the list to show time zones containing “Dhaka” (replace with your desired time zone).
  • timedatectl set-timezone Asia/Dhaka: Sets the system’s time zone to “Asia/Dhaka” (replace with your desired time zone).
  • timedatectl: Verifies the updated time and date settings.

5. Disk Management for Installation

Manage the disk partitions for the installation:

lsblk
ls /sys/firmware/efi/efivars
blkid /dev/vda
cfdisk
lsblk
mkfs.btrfs -f /dev/vda1
mkfs.fat -F32 /dev/vda2
blkid /dev/vda
mount /dev/vda1 /mnt
cd /mnt
btrfs subvolume create @
btrfs subvolume create @home
cd
umount /mnt
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@ /dev/vda1 /mnt
mkdir /mnt/home
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@home /dev/vda1 /mnt/home
mkdir -p /mnt/boot/efi
mount /dev/vda2 /mnt/boot/efi

mkdir /mnt/windows
lsblk

Explanation:

  • lsblk: Lists available block devices and their partitions.
  • ls /sys/firmware/efi/efivars: Verifies that the system is booted in UEFI mode.
  • blkid /dev/vda: Displays information about the /dev/vda drive (replace with the appropriate drive if different).
  • cfdisk: Creates the two partitions: 1. the main system partition, 2. the EFI partition.

Disk encryption (optional; if you skip it, use /dev/vda1 directly wherever /dev/mapper/main appears below):

  • cryptsetup luksFormat /dev/vda1: Sets up encryption on the main partition.

  • cryptsetup luksOpen /dev/vda1 main: Opens your encrypted partition as /dev/mapper/main.

  • lsblk: Lists the updated block devices and their partitions after partitioning.

  • mkfs.btrfs -f /dev/mapper/main: Formats the system partition (/dev/mapper/main, or /dev/vda1 if unencrypted) as Btrfs.

  • mkfs.fat -F32 /dev/vda2: Formats the EFI System partition (/dev/vda2) as FAT32.

  • blkid /dev/vda: Verifies the UUIDs of the formatted partitions.

  • mount /dev/mapper/main /mnt: Mounts the system partition (main) to the /mnt directory.

  • cd /mnt: Changes the current directory to /mnt.

  • btrfs subvolume create @: Creates a Btrfs subvolume named “@” for the root directory.

  • btrfs subvolume create @home: Creates a Btrfs subvolume named “@home” for the home directory.

  • cd: Returns to the home directory.

  • umount /mnt: Unmounts the /mnt directory.

  • mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@ /dev/vda1 /mnt: Mounts the system partition with the Btrfs subvolume “@”, applying the specified mount options.

  • mkdir /mnt/home: Creates the /mnt/home directory.

  • mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@home /dev/vda1 /mnt/home: Mounts the system partition with the Btrfs subvolume “@home” to /mnt/home, applying the specified mount options.

  • mkdir -p /mnt/boot/efi: Creates the /mnt/boot/efi directory.

  • mount /dev/vda2 /mnt/boot/efi: Mounts the EFI System partition (/dev/vda2) to the /mnt/boot/efi directory.

(Optional) For Windows partition:

  • mkdir /mnt/windows: Creates the /mnt/windows directory.
  • lsblk: Lists available block devices and their partitions to identify the Windows partition.

6. System Installation

Install the base system:

reflector --country Bangladesh --age 6 --sort rate --save /etc/pacman.d/mirrorlist
pacman -Sy
pacstrap -K /mnt base linux linux-firmware intel-ucode vim
genfstab -U /mnt >> /mnt/etc/fstab
cat /mnt/etc/fstab

Explanation:

  • reflector --country Bangladesh --age 6 --sort rate --save /etc/pacman.d/mirrorlist: Updates the mirrorlist file with the fastest mirrors in Bangladesh (replace with your desired country).
  • pacman -Sy: Synchronizes package databases.
  • pacstrap -K /mnt base linux linux-firmware intel-ucode vim: Installs essential packages (replace with any additional packages you may need).
  • genfstab -U /mnt >> /mnt/etc/fstab: Generates an fstab file based on the current disk configuration.
  • cat /mnt/etc/fstab: Displays the contents of the generated fstab file for verification.

7. Configuring the New Installation (arch-chroot)

Enter the newly installed system for configuration:

arch-chroot /mnt
ls
ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime
hwclock --systohc
vim /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" >> /etc/locale.conf
echo "KEYMAP=us" >> /etc/vconsole.conf
vim /etc/hostname
passwd
pacman -S grub-btrfs efibootmgr networkmanager network-manager-applet dialog wpa_supplicant mtools dosfstools reflector base-devel linux-headers bluez bluez-utils cups hplip alsa-utils pipewire pipewire-alsa pipewire-pulse pipewire-jack bash-completion openssh rsync acpi acpi_call tlp sof-firmware acpid os-prober ntfs-3g

Explanation:

  • arch-chroot /mnt: Changes the root to the newly installed system (/mnt).
  • ls: Lists the contents of the root directory to verify the chroot environment.
  • ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime: Creates a symbolic link from the systemโ€™s time zone file to /etc/localtime, setting the systemโ€™s time zone to โ€œAsia/Dhakaโ€ (replace with your desired time zone).
  • hwclock --systohc: Sets the hardware clock from the system clock.
  • vim /etc/locale.gen: Opens the locale.gen file for editing.
    • Uncomment the line containing โ€œen_US.UTF-8โ€ by removing the leading โ€œ#โ€ character.
  • locale-gen: Generates the locales based on the uncommented entries in locale.gen.
  • echo "LANG=en_US.UTF-8" >> /etc/locale.conf: Sets the LANG variable in locale.conf to โ€œen_US.UTF-8โ€.
  • echo "KEYMAP=us" >> /etc/vconsole.conf: Sets the KEYMAP variable in vconsole.conf to โ€œusโ€ (replace with your desired keyboard layout).
  • vim /etc/hostname: Opens the hostname file for editing.
    • Set the hostname to โ€œarchโ€ (replace with your desired hostname).
  • passwd: Sets the root password.
  • pacman -S grub efibootmgr networkmanager network-manager-applet dialog wpa_supplicant mtools dosfstools reflector base-devel linux-headers bluez bluez-utils cups hplip alsa-utils pipewire pipewire-alsa pipewire-pulse pipewire-jack bash-completion openssh rsync acpi acpi_call tlp sof-firmware acpid os-prober ntfs-3g: Installs various packages necessary for the system, including GRUB, network management tools, Bluetooth support, printer support, audio utilities, and other useful packages. Adjust the list based on your requirements.

8. Edit The Mkinitcpio File For Encrypt

  • vim /etc/mkinitcpio.conf and search for HOOKS;
  • add encrypt (before the filesystems hook);
  • add atkbd to MODULES (enables external keyboards at the device decryption prompt);
  • add btrfs to MODULES; and,
  • regenerate the initramfs with mkinitcpio -p linux (see the example below).
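
For orientation, the edited lines might look roughly like this (a sketch, not a drop-in config; your other hooks and modules may differ, and keyboard-related hooks must come before encrypt so you can type the passphrase):

MODULES=(atkbd btrfs)
HOOKS=(base udev autodetect keyboard modconf block encrypt filesystems fsck)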

9. Grub Installation

Install and configure Grub:

grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg

  • run blkid and obtain the UUID of the main partition: blkid /dev/vda1
  • edit the grub config: vim /etc/default/grub
  • GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet cryptdevice=UUID=d33844ad-af1b-45c7-9a5c-cf21138744b4:main root=/dev/mapper/main"
  • regenerate the grub config with grub-mkconfig -o /boot/grub/grub.cfg

Explanation:

  • grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB: Installs GRUB bootloader on the EFI System partition (/dev/vda2) with the bootloader ID โ€œGRUBโ€.
  • grub-mkconfig -o /boot/grub/grub.cfg: Generates the GRUB configuration file based on the installed operating systems.
  • vim /etc/default/grub: Opens the GRUB configuration file for editing.
    • Uncomment the line with โ€œos-proberโ€ by removing the leading โ€œ#โ€ character. This allows GRUB to detect other installed operating systems.
  • grub-mkconfig -o /boot/grub/grub.cfg: Generates the GRUB configuration file again to include the changes made.

10. Enabling Systemd Services

Enable necessary systemd services:

systemctl enable NetworkManager
systemctl enable bluetooth
systemctl enable cups.service
systemctl enable sshd
systemctl enable tlp
systemctl enable reflector.timer
systemctl enable fstrim.timer
systemctl enable acpid

Explanation:

  • systemctl enable NetworkManager: Enables the NetworkManager service to manage network connections.
  • systemctl enable bluetooth: Enables the Bluetooth service.
  • systemctl enable cups.service: Enables the CUPS (Common Unix Printing System) service for printer support.
  • systemctl enable sshd: Enables the SSH server for remote access.
  • systemctl enable tlp: Enables the TLP service for power management.
  • systemctl enable reflector.timer: Enables the Reflector timer to update the mirrorlist regularly.
  • systemctl enable fstrim.timer: Enables the fstrim timer to trim the filesystem regularly.
  • systemctl enable acpid: Enables the ACPI (Advanced Configuration and Power Interface) service.

11. Creating a New User

Create a new user and grant sudo access:

useradd -m akib
passwd akib
echo "akib ALL=(ALL) ALL" >> /etc/sudoers.d/akib
usermod -c 'Akib Ahmed' akib
exit

Explanation:

  • useradd -m akib: Creates a new user account named โ€œakibโ€ with the -m flag to create the userโ€™s home directory.
  • passwd akib: Sets the password for the newly created user โ€œakibโ€.
  • echo "akib ALL=(ALL) ALL" >> /etc/sudoers.d/akib: Grants sudo access to the user โ€œakibโ€ by adding a sudoers file for the user.
  • usermod -c 'Akib Ahmed' akib: Sets the userโ€™s full name as โ€œAkib Ahmedโ€ (replace with the desired full name).
  • exit: Exits the chroot environment.

12. Finishing the Installation

Unmount partitions and reboot the system:

umount -R /mnt
reboot

Explanation:

  • umount -R /mnt: Unmounts all the partitions mounted under /mnt.
  • reboot: Reboots the system.

Once the system reboots, you can log in with the newly created user and continue the setup process.

13. Post-Installation Configuration

After logging in with the newly created user, perform the following steps:

nmtui
  • Opens the NetworkManager Text User Interface (TUI) for managing network connections.
ip -c a
  • Displays the IP addresses and network interfaces for verification.
grub-mkconfig -o /boot/grub/grub.cfg
  • Generates the GRUB configuration file to include any changes made during the post-installation steps.
sudo pacman -S git
  • Installs the Git package.
git clone https://aur.archlinux.org/yay-bin.git
  • Clones the Yay AUR (Arch User Repository) package from the AUR repository.
ls
cd yay-bin/
makepkg -si
cd
  • Changes directory to the cloned โ€œyay-binโ€ directory, builds the package, and installs it using makepkg.
yay
  • Verifies the successful installation of Yay by running the command.
yay -S timeshift-bin timeshift-autosnap
  • Installs the Timeshift packages from the AUR using Yay.
sudo timeshift --list-devices
  • Lists the available devices for creating Timeshift snapshots.
sudo timeshift --snapshot-device /dev/vda1
  • Sets the device (/dev/vda1) to be used for creating Timeshift snapshots.
sudo timeshift --create --comments "First Backup" --tags D
  • Creates a Timeshift snapshot with a comment and assigns it the โ€œDโ€ tag for easy identification.
sudo grub-mkconfig -o /boot/grub/grub.cfg
  • Generates the GRUB configuration file again to include any changes made during the post-installation steps.

Ensure you have read and understood each step before proceeding. These additional steps cover various post-installation configurations, including network setup, package installation with Yay, and creating a Timeshift backup.

Happy Arch Linux configuration! ๐Ÿง

Gentoo Installation Guide

This comprehensive guide provides a detailed walkthrough for installing Gentoo Linux. Adjustments may be required based on your specific hardware and preferences.

Prerequisites

  • A reliable internet connection.
  • A virtual or physical machine with a target disk (e.g., /dev/vdx).

1. Check Internet Connection

Make sure your internet connection is working:

ping -c 5 www.google.com

2. Disk Partitioning

Partition your disk using fdisk:

fdisk /dev/vdx

Follow these steps in fdisk:

  • Press g for GPT partition.
  • Create partitions for boot, swap, and root using n.
  • Change partition labels using t: set boot to EFI, swap to Linux swap.

Format partitions:

mkfs.vfat -F 32 /dev/vdx1
mkswap /dev/vdx2
swapon /dev/vdx2
mkfs.ext4 /dev/vdx3

Mount the root partition:

mkdir -p /mnt/gentoo
mount /dev/vdx3 /mnt/gentoo

3. Installing a Stage Tarball

Navigate to the Gentoo mirrors and download the stage3 tarball:

cd /mnt/gentoo
links https://www.gentoo.org/downloads/mirrors/
tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
vi /mnt/gentoo/etc/portage/make.conf 

In make.conf, specify your CPU architecture and core:

COMMON_FLAGS="-march=alderlake -O2 -pipe"
MAKEOPTS="-j8"
FEATURES="candy parallel-fetch parallel-install"
ACCEPT_LICENSE="*"
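
If you are unsure what to set MAKEOPTS to, the -jN value is usually matched to your core count, which you can query directly:

nproc   # prints the number of available CPU cores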

4. Installing the Gentoo Base System

Select a mirror:

mirrorselect -i -o >> /mnt/gentoo/etc/portage/make.conf

Create necessary directories:

mkdir -p /mnt/gentoo/etc/portage/repos.conf
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/

Mount essential filesystems:

mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
mount --bind /run /mnt/gentoo/run
mount --make-slave /mnt/gentoo/run

Chroot into the new environment:

chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) ${PS1}"

Mount the EFI boot partition:

mkdir /efi
mount /dev/vdx1 /efi

5. Configuring Portage Package Manager of Gentoo

emerge-webrsync
emerge --sync
emerge --sync --quiet
eselect profile list
eselect profile set 9
emerge --ask --verbose --update --deep --newuse @world
nano /etc/portage/make.conf

In make.conf, add USE flags:

USE="-gtk -gnome qt5 kde dvd alsa cdr"

Create a package.license directory and edit the kernel license:

mkdir /etc/portage/package.license
nvim /etc/portage/package.license/kernel

Add the following licenses:

app-arch/unrar unRAR
sys-kernel/linux-firmware @BINARY-REDISTRIBUTABLE
sys-firmware/intel-microcode intel-ucode

6. Timezone and Locale Configuration

Set your timezone:

ls /usr/share/zoneinfo
echo "Asia/Dhaka" > /etc/timezone
emerge --config sys-libs/timezone-data

Configure locales:

emerge app-editors/neovim
nvim /etc/locale.gen

Uncomment the necessary locales and set the default:

en_US ISO-8859-1
en_US.UTF-8 UTF-8

Set the locale:

locale-gen
eselect locale list
eselect locale set 6
env-update && source /etc/profile && export PS1="(chroot) ${PS1}"

7. Configuring the Kernel

emerge --ask sys-kernel/linux-firmware
emerge --ask sys-kernel/gentoo-sources
eselect kernel list
eselect kernel set 1
emerge --ask sys-apps/pciutils
cd /usr/src/linux
make menuconfig
make && make modules_install
make install

Alternatively, use Genkernel:

emerge --ask sys-kernel/linux-firmware
emerge --ask sys-kernel/genkernel
genkernel --mountboot --install all
ls /boot/vmlinu* /boot/initramfs*
ls /lib/modules 

Or, Use the binary kernel:

emerge --ask sys-kernel/gentoo-kernel
emerge --ask --autounmask-write sys-kernel/gentoo-kernel-bin
etc-update
emerge -a sys-kernel/gentoo-kernel-bin

8. Configuring Fstab and Networking

Edit fstab to reflect your disk configuration:

nvim /etc/fstab

Add entries for EFI, swap, and root partitions:

/dev/vdx1   /efi        vfat    defaults    0 2
/dev/vdx2   none        swap    sw          0 0
/dev/vdx3   /           ext4    defaults,noatime 0 1

Configure networking:

echo virt > /etc/hostname
emerge --ask --noreplace net-misc/netifrc
nvim /etc/conf.d/net

Add your network configuration:

config_enp1s0="dhcp"

Set networking to start at boot:

cd /etc/init.d
ln -s net.lo net.enp1s0
rc-update add net.enp1s0 default


9. Editing Hosts and System Configuration

Edit the hosts file:

nano /etc/hosts

Add or edit the hosts file with appropriate entries:

127.0.0.1     virt    localhost
::1           virt    localhost

Set system information:

passwd
nano /etc/conf.d/hwclock

Edit hwclock configuration:

clock="local"

10. System Logger and Additional Software

emerge --ask app-admin/sysklogd
rc-update add sysklogd default
rc-update add sshd default
nano -w /etc/inittab

Add SERIAL CONSOLES configuration:

s0:12345:respawn:/sbin/agetty 9600 ttyS0 vt100
s1:12345:respawn:/sbin/agetty 9600 ttyS1 vt100

Install additional software:

emerge --ask sys-fs/e2fsprogs
emerge --ask sys-block/io-scheduler-udev-rules
emerge --ask net-misc/dhcpcd
emerge --ask net-dialup/ppp
emerge --ask net-wireless/iw net-wireless/wpa_supplicant

11. Boot Loader

echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge --ask --verbose sys-boot/grub
grub-install --target=x86_64-efi --efi-directory=/efi
grub-mkconfig -o /boot/grub/grub.cfg
exit
cd
umount -l /mnt/gentoo/dev{/shm,/pts,}
umount -R /mnt/gentoo
reboot

12. Adding a User for Daily Use

useradd -m -G users,wheel,audio -s /bin/bash akib
passwd akib

Removing Tarballs

rm /stage3-*.tar.*

13. Sound (PipeWire) Setup

emerge -av media-libs/libpulse
emerge --ask media-video/pipewire
emerge --ask media-video/wireplumber
usermod -aG pipewire akib   # add the user to the pipewire group
emerge --ask sys-auth/rtkit # realtime scheduling support for PipeWire
usermod -rG audio akib      # remove the user from the legacy audio group

mkdir /etc/pipewire
cp /usr/share/pipewire/pipewire.conf /etc/pipewire/pipewire.conf

mkdir ~/.config/pipewire
cp /usr/share/pipewire/pipewire.conf ~/.config/pipewire/pipewire.conf

Add the following configuration to ~/.config/pipewire/pipewire.conf:

context.properties = {
    default.clock.rate = 192000
    default.clock.allowed-rates = [ 192000 48000 44100 ]  # Up to 16 can be specified
}

14. Xorg Setup

Edit /etc/portage/make.conf: append X to the USE line you set earlier, and set the input and video drivers. Keep a single VIDEO_CARDS line that matches your GPU (e.g., nouveau for NVIDIA, radeon for AMD):

USE="-gtk -gnome qt5 kde dvd alsa cdr X"
INPUT_DEVICES="libinput synaptics"
VIDEO_CARDS="nouveau"

Install Xorg drivers and server:

emerge --ask --verbose x11-base/xorg-drivers
emerge --ask x11-base/xorg-server
env-update
source /etc/profile

15. Setting up Display Manager (SDDM)

emerge --ask x11-misc/sddm
usermod -a -G video sddm
vim /etc/sddm.conf

Add the following lines:

[X11]
DisplayCommand=/etc/sddm/scripts/Xsetup

Create /etc/sddm/scripts/Xsetup:

mkdir -p /etc/sddm/scripts
touch /etc/sddm/scripts/Xsetup
chmod a+x /etc/sddm/scripts/Xsetup

Edit /etc/conf.d/xdm and add:

DISPLAYMANAGER="sddm"

Enable the display manager at boot:

rc-update add xdm default
emerge --ask gui-libs/display-manager-init
vim /etc/conf.d/display-manager

In that file, set:

CHECKVT=7
DISPLAYMANAGER="sddm"

Then enable and start the service:

rc-update add display-manager default
rc-service display-manager start

16. Desktop Installation (KDE Plasma)

Pick the system profile that matches the desktop environment you want:

eselect profile list
eselect profile set X

Replace X with the number of the matching desktop profile. Then install KDE Plasma and a few essentials:

emerge --ask kde-plasma/plasma-meta
emerge konsole
emerge firefox-bin

Create ~/.xinitrc and add:

#!/bin/sh
exec dbus-launch --exit-with-session startplasma-x11
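
With SDDM enabled you normally won't need this file, but if x11-apps/xinit is installed it lets you test the session manually from a TTY:

startx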

This guide is designed to provide a comprehensive and detailed walkthrough for installing Gentoo Linux. Feel free to customize it further based on your specific needs and preferences.

Tools

The Complete Linux & Bash Command-Line Guide

Master the Linux command line from first principles to advanced automation. This comprehensive guide organizes commands by what you want to accomplish, making it your go-to reference whether youโ€™re taking your first steps or optimizing complex workflows.

๐Ÿงญ Table of Contents

  1. Foundations: Understanding the Command Line
  2. Navigation: Finding Your Way Around
  3. File Operations: Creating, Moving, and Deleting
  4. Reading and Viewing Files
  5. Searching: Finding Files and Text
  6. Advanced Text Processing: Power Tools
  7. Users, Permissions, and Access Control
  8. Process and System Management
  9. Networking Essentials
  10. Archives and Compression
  11. Bash Scripting: Automating Tasks
  12. Input/Output Redirection
  13. Advanced Techniques and Power User Features
  14. Troubleshooting and Debugging

1. Foundations: Understanding the Command Line

The Anatomy of a Command

Every Linux command follows a predictable pattern that, once understood, unlocks the entire system:

command -options arguments
  • command: The program or tool youโ€™re invoking (like ls to list files)
  • -options: Modifiers that change behavior, also called flags or switches (like -l for โ€œlong formatโ€)
  • arguments: What you want the command to operate on (like /home/user)

Example breakdown:

ls -la /var/log
โ”‚  โ”‚  โ””โ”€ argument (which directory)
โ”‚  โ””โ”€โ”€โ”€โ”€ options (long format + all files)
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€ command (list contents)

The Pipe: Your Most Powerful Tool

The pipe operator | is the cornerstone of command-line productivity. It channels the output of one command directly into the input of another, letting you chain simple tools into sophisticated operations.

cat server.log | grep "ERROR" | wc -l

What happens here:

  1. cat outputs the entire log file
  2. | feeds that output to grep
  3. grep filters for lines containing โ€œERRORโ€
  4. | feeds those filtered lines to wc
  5. wc -l counts how many lines remain

Think of pipes as assembly lines: each command does one thing well, then passes its work to the next station.
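
Incidentally, the cat at the start of that pipeline is only there for illustration; grep can read the file and count matches on its own:

grep -c "ERROR" server.log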

Essential Survival Skills

Getting Help When Youโ€™re Stuck

man command_name

The man (manual) command is your built-in encyclopedia. Every standard command has a manual page explaining its purpose, options, and usage. Navigate with arrow keys, search with /search_term, and quit with q.

โš ๏ธ Common Mistake: Forgetting that man exists and searching online first. While web searches are valuable, man pages are authoritative, always available offline, and specific to your systemโ€™s version.

Quick reference alternatives:

  • command --help or command -h: Brief usage summary (faster than man)
  • apropos keyword: Search all manual pages for a keyword

Tab Completion: Stop Typing So Much

Press Tab at any point while typing a command, filename, or path. The shell will:

  • Complete the word if thereโ€™s only one match
  • Show you all possibilities if there are multiple matches
  • Save you from typos and help you discover available options

Pro tip: Press Tab twice in quick succession to see all possible completions without typing anything further.

Quoting Rules That Matter

Quotes arenโ€™t stylisticโ€”they fundamentally change how the shell interprets your input:

Double quotes ": The shell expands variables and substitutions

echo "Hello, $USER"  # Outputs: Hello, akib
echo "Current dir: $(pwd)"  # Outputs: Current dir: /home/akib

Single quotes ': Everything is literalโ€”no expansions occur

echo 'Hello, $USER'  # Outputs: Hello, $USER
echo 'Cost: $50'  # Outputs: Cost: $50

When to use which:

  • Use double quotes by default for strings containing variables
  • Use single quotes when you want literal text (like in sed or awk patterns)
  • Use no quotes for simple, single-word arguments
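
The most common real-world payoff: always double-quote variables that may contain spaces, or the shell splits them into separate arguments:

file="my notes.txt"
cat "$file"   # one argument: works
cat $file     # splits into 'my' and 'notes.txt': fails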

The sudo Privilege System

Linux protects critical system operations by requiring administrator privileges. Rather than logging in as the dangerous โ€œrootโ€ user, use sudo to execute individual commands with elevated rights:

sudo apt update  # Update package lists (requires admin)
sudo reboot      # Restart the system

How it works: sudo (Superuser Do) temporarily grants your command root privileges. Youโ€™ll be prompted for your password the first time, then you have a grace period (typically 15 minutes) before it asks again.

โš ๏ธ Warning: With great power comes great responsibility. sudo can break your system if misused. Always double-check commands that start with sudo.


2. Navigation: Finding Your Way Around

Understanding Where You Are

The Linux filesystem is a tree structure. Unlike Windows with its separate drives (C:, D:), everything branches from a single root /.

pwd

Print Working Directory shows your current location:

/home/akib/projects/website

Best practice: Run pwd when youโ€™re disoriented. Itโ€™s free and instant.

Seeing Whatโ€™s Around You

ls

The list command shows directory contents, but itโ€™s far more powerful with options:

ls -la

This is the command youโ€™ll use 90% of the time:

  • -l: Long format showing permissions, owner, size, date
  • -a: Show all files, including hidden ones (starting with .)

Output anatomy:

drwxr-xr-x  5 akib akib  4096 Oct 24 10:30 Documents
-rw-r--r--  1 akib akib  2048 Oct 23 15:42 notes.txt
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ”‚ โ”‚    โ”‚     โ”‚    โ”‚         โ””โ”€ filename
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ”‚ โ”‚    โ”‚     โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ modification date
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ”‚ โ”‚    โ”‚     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ size in bytes
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ”‚ โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ group
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ”‚ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ owner
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ number of links
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ permissions (others)
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ permissions (group)
โ”‚โ”‚โ”‚โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ permissions (owner)
โ”‚โ”‚โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ execute/search
โ”‚โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ write
โ”‚โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ read
โ”‚โ”‚โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ file type (d=directory, -=file)

Useful variations:

  • ls -lh: Human-readable sizes (2.1M instead of 2048576)
  • ls -lt: Sort by time (newest first)
  • ls -lS: Sort by Size (largest first)
  • ls -lR: Recursive (show subdirectories too)

Moving Between Directories

cd directory_name

Change Directory is your navigation command. It understands both absolute and relative paths:

Absolute paths start from root /:

cd /var/log  # Go directly to /var/log from anywhere

Relative paths start from your current location:

cd Documents          # Go into Documents subdirectory
cd ../Downloads       # Go up one level, then into Downloads
cd ../../shared/data  # Go up two levels, then down a different branch

Special shortcuts:

cd          # Go to your home directory (/home/username)
cd ~        # Same as above (~ means "home")
cd -        # Return to previous directory (like "back" button)
cd ..       # Go up one directory level
cd ../..    # Go up two levels

โš ๏ธ Common Mistake: Forgetting that cd without arguments takes you home. If you accidentally run cd and lose your place, use cd - to get back.

Advanced Navigation: The Directory Stack

For power users who jump between multiple locations:

pushd /var/log        # Save current location, jump to /var/log
pushd ~/projects      # Save /var/log, jump to ~/projects
dirs                  # View the stack
popd                  # Return to /var/log
popd                  # Return to original location

Why use this? When youโ€™re working across multiple directory trees (e.g., comparing logs in /var/log with configs in /etc while editing code in ~/projects), the directory stack is faster than repeatedly typing full paths.

Clearing the Clutter

clear

Clears your terminal screen without affecting your work. Useful when output has become overwhelming.

Keyboard shortcut: Ctrl+L does the same thing (faster than typing).


3. File Operations: Creating, Moving, and Deleting

Creating Files

touch filename.txt

Creates an empty file or updates the timestamp on an existing file. While touch seems simple, itโ€™s essential for:

  • Creating placeholder files
  • Resetting modification times
  • Testing write permissions in a directory

Why the name โ€œtouchโ€? It โ€œtouchesโ€ the file, updating its access time without modifying contents.
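
touch can also set a timestamp explicitly with -t (format [[CC]YY]MMDDhhmm), which is handy when testing time-based logic. The filename here is just an example:

touch -t 202501011200 placeholder.txt  # set the timestamp to Jan 1 2025, 12:00
ls -l placeholder.txt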

Creating Directories

mkdir new_folder

Make Directory creates a new folder. But the real power comes with options:

mkdir -p path/to/deeply/nested/folder

The -p (parents) flag creates all intermediate directories automatically. Without it, youโ€™d need to create each level separately:

# Without -p (tedious):
mkdir path
mkdir path/to
mkdir path/to/deeply
mkdir path/to/deeply/nested
mkdir path/to/deeply/nested/folder

# With -p (elegant):
mkdir -p path/to/deeply/nested/folder

Best practice: Always use -p unless you specifically want an error when parent directories donโ€™t exist.

Copying Files and Directories

cp source.txt destination.txt

Copy creates a duplicate of a file:

# Copy and rename:
cp report.txt report_backup.txt

# Copy to another directory (keeping same name):
cp report.txt ~/Documents/

# Copy to another directory with new name:
cp report.txt ~/Documents/final_report.txt

For directories, use -r (recursive):

cp -r project/ project_backup/

Without -r, youโ€™ll get an error: cp: -r not specified; omitting directory 'project/'

Useful options:

  • -i: Interactiveโ€”prompt before overwriting
  • -v: Verboseโ€”show whatโ€™s being copied
  • -u: Updateโ€”only copy if source is newer than destination
  • -a: Archive modeโ€”preserves permissions, timestamps, and structure (ideal for backups)

Pro tip: Combine flags for safety and visibility:

cp -riv source/ destination/

Moving and Renaming

mv old_name.txt new_name.txt

Move serves double duty:

Renaming (destination in same directory):

mv draft.txt final.txt

Moving (destination in different directory):

mv final.txt ~/Documents/

Moving and renaming simultaneously:

mv draft.txt ~/Documents/final.txt

Moving directories (no -r flag needed):

mv old_folder/ new_location/

โš ๏ธ Warning: Unlike cp, mv doesnโ€™t have a built-in way to prevent overwriting. Use -i for safety:

mv -i source.txt destination.txt  # Prompts if destination exists

Deleting Files

rm filename.txt

Remove permanently deletes files. There is no โ€œRecycle Binโ€ or โ€œTrashโ€ on the command lineโ€”once removed, files are gone.

โš ๏ธ CRITICAL WARNING: The most dangerous command in Linux is:

sudo rm -rf /

NEVER RUN THIS. It recursively (-r) and forcefully (-f) deletes everything on your system, including the operating system itself.

Safe deletion practices:

# Delete a single file:
rm old_file.txt

# Delete with confirmation:
rm -i file.txt  # Prompts before deleting

# Delete multiple files:
rm file1.txt file2.txt file3.txt

# Delete directories (requires -r):
rm -r old_folder/

# Force deletion without prompts (use cautiously):
rm -rf temporary_folder/

Protecting yourself:

  1. Always double-check the path before using -r
  2. Use ls first to verify what youโ€™re about to delete
  3. Never use -rf together unless youโ€™re certain
  4. Consider aliasing rm to rm -i in your .bashrc for an automatic safety net (see the sketch below)
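
A minimal sketch of that alias approach (the directory name is hypothetical):

# In ~/.bashrc:
alias rm='rm -i'

# Bypass the alias when you genuinely want no prompts:
\rm -rf build_output/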

Alternative for empty directories:

rmdir empty_folder/

This only works on empty directories, providing a safety check against accidental deletion.

Creating Links: ln

ln -s /path/to/original /path/to/link

Links are like shortcuts or references. The -s creates a symbolic (soft) linkโ€”the most commonly used type.

Symbolic links point to a file path:

ln -s /var/www/html/index.php ~/index_link.php

Now you can edit ~/index_link.php and the changes affect the original file in /var/www/html/.

Real-world use cases:

  • Creating shortcuts to deeply nested files
  • Maintaining multiple versions (link to the current version)
  • Organizing files without duplicating them
  • Cross-referencing configurations

Viewing links:

ls -l
# Output: lrwxrwxrwx ... index_link.php -> /var/www/html/index.php
#         ^
#         'l' indicates it's a link

Hard links (without -s) create a direct reference to file data:

ln original.txt hardlink.txt

Hard links are less common because they have limitations (canโ€™t span filesystems, canโ€™t link directories).
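
You can verify that two hard links share the same underlying data by comparing inode numbers:

ln original.txt hardlink.txt
ls -li original.txt hardlink.txt  # identical inode numbers = same file data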

Identifying File Types

file mysterious_file

Determines what type of file something is, regardless of its extension (or lack thereof):

$ file script
script: POSIX shell script, ASCII text executable

$ file image.jpg
image.jpg: JPEG image data, JFIF standard 1.01

$ file compiled_program
compiled_program: ELF 64-bit LSB executable, x86-64

Why this matters: Unix doesnโ€™t rely on file extensions like Windows does. A file named document could be a text file, an image, or a program. The file command examines the actual content to tell you what it is.

Verifying File Integrity

md5sum filename
sha256sum filename

These commands generate cryptographic โ€œfingerprintsโ€ (checksums) of files:

$ sha256sum ubuntu-22.04.iso
b8f31413336b9393ad5d8ef0282717b2ab19f007df2e9ed5196c13d8f9153c8b  ubuntu-22.04.iso

Use cases:

  • Verify downloaded files havenโ€™t been corrupted or tampered with
  • Check if two files are identical without comparing them byte-by-byte
  • Detect changes in files (checksums change if even one bit changes)

Verification workflow:

# Download a file and its checksum:
wget https://example.com/file.zip
wget https://example.com/file.zip.sha256

# Verify:
sha256sum -c file.zip.sha256
# Output: file.zip: OK  (means it matches)

4. Reading and Viewing Files

Quick Output: cat

cat filename.txt

Concatenate dumps the entire contents of a file to your screen instantly. Perfect for short files or when you need to pipe content to another command.

Multiple files:

cat file1.txt file2.txt file3.txt  # Shows all files in sequence

Combining files:

cat part1.txt part2.txt > complete.txt

โš ๏ธ Common Mistake: Using cat on large files. If you accidentally run cat on a gigabyte-sized log file, your terminal will freeze while it tries to display millions of lines. Use less instead.

Quick tips:

  • cat -n: Number all lines
  • cat -A: Show all special characters (tabs, line endings, etc.)

Interactive Viewing: less

less large_file.log

A powerful pager for viewing files of any size. Unlike cat, it doesnโ€™t load the entire file into memoryโ€”you can view gigabyte-sized files instantly.

Essential controls:

  • Spacebar or PageDown: Next page
  • b or PageUp: Previous page
  • g: Jump to beginning
  • G: Jump to end
  • /search_term: Search forward
  • ?search_term: Search backward
  • n: Next search result
  • N: Previous search result
  • q: Quit

Why โ€œlessโ€ is more: The name is a play on an older program called more. The joke: โ€œless is more than more,โ€ meaning less has more features than more.

Pro tips:

less +F file.log  # Start in "follow" mode (like tail -f)
# Press Ctrl+C to stop following, then navigate normally

First and Last Lines

head filename.txt  # First 10 lines
tail filename.txt  # Last 10 lines

Custom line counts:

head -n 50 access.log  # First 50 lines
tail -n 100 error.log  # Last 100 lines

The killer feature: tail -f

tail -f /var/log/syslog

The -f (follow) flag watches a file in real-time, displaying new lines as theyโ€™re added. This is indispensable for:

  • Monitoring live log files
  • Watching build processes
  • Debugging applications in real-time

Stop following: Press Ctrl+C

Pro tip: Follow multiple files simultaneously:

tail -f /var/log/nginx/access.log /var/log/nginx/error.log

Reverse Text: rev

rev filename.txt

Reverses each line character-by-character:

Input:  Hello World
Output: dlroW olleH

Practical use? Honestly, itโ€™s rarely used except for:

  • Fun text manipulation
  • Certain data processing tasks
  • Reversing accidentally reversed text

The Universal Editor: vi/vim

vi filename.txt

Vi (and its improved version, Vim) is the most universally available text editorโ€”present on virtually every Unix-like system. Even if it seems arcane at first, knowing vi basics is essential for system administration.

Bare minimum survival guide:

  1. Opening: vi filename

  2. Modes:

    • Normal mode (default): For navigation and commands
    • Insert mode: For typing text (press i to enter)
    • Command mode: For saving/quitting (press : to enter)
  3. Basic workflow:

    • Press i to start inserting text
    • Type your content
    • Press Esc to return to Normal mode
    • Type :wq and press Enter to write and quit
  4. Emergency exit:

    • If youโ€™re stuck: Press Esc several times, then type :q! and press Enter
    • :q! quits without saving (overriding any warnings)

Why learn vi?

  • Itโ€™s the only editor guaranteed to be present on remote servers
  • Itโ€™s powerful once you overcome the initial learning curve
  • Many modern IDEs offer vim keybindings because theyโ€™re efficient

Alternatives if vi isnโ€™t your thing:

  • nano: Simpler, more intuitive for beginners
  • emacs: Powerful but requires installation on some systems

5. Searching: Finding Files and Text

Searching Inside Files: grep

grep "search_term" filename.txt

Global Regular Expression Print is your text search workhorse. It scans files line-by-line and outputs matching lines.

Basic examples:

# Find error messages in a log:
grep "ERROR" application.log

# Case-insensitive search:
grep -i "warning" system.log  # Matches WARNING, Warning, warning

# Show line numbers:
grep -n "TODO" script.sh
# Output: 42:# TODO: Fix this later

# Invert match (show lines that DON'T match):
grep -v "DEBUG" app.log  # Hide debug messages

# Count matches:
grep -c "success" results.txt
# Output: 127

Recursive search through directories:

grep -r "config_value" /etc/

This searches through all files in /etc/ and its subdirectoriesโ€”incredibly powerful for finding where a setting is defined.

Advanced options:

  • -A 3: Show 3 lines After each match (context)
  • -B 3: Show 3 lines Before each match
  • -C 3: Show 3 lines of Context (both before and after)
  • -E: Use extended regular expressions (more powerful patterns)
  • -w: Match whole words only
  • -x: Match whole lines only (exact)

Real-world power move:

grep -rn "import pandas" ~/projects/ --include="*.py"

Find all Python files in your projects that import pandas, showing line numbers.

โš ๏ธ Common Pitfall: Forgetting that grep returns an exit code. This matters in scripts:

if grep -q "error" log.txt; then
    echo "Errors found!"
fi

The -q (quiet) flag suppresses outputโ€”we only care about the exit code.

Searching for Files: find

find /starting/path -name "pattern"

While grep searches inside files, find searches for files themselves based on name, size, type, permissions, modification time, and more.

Search by name:

# Find all .log files:
find /var/log -name "*.log"

# Case-insensitive name search:
find /home -iname "*.JPG"  # Matches .jpg, .JPG, .Jpg, etc.

Search by type:

find /etc -type f  # Only files
find /tmp -type d  # Only directories
find /dev -type l  # Only symbolic links

Search by time:

# Modified in last 7 days:
find . -mtime -7

# Modified more than 30 days ago:
find . -mtime +30

# Modified exactly 5 days ago:
find . -mtime 5

# Accessed in last 24 hours:
find /var/log -atime -1

Search by size:

# Files larger than 100MB:
find /home -size +100M

# Files smaller than 1KB:
find . -size -1k

# Files between 10MB and 50MB:
find . -size +10M -size -50M

Combining criteria (AND logic is default):

# Large log files modified recently:
find /var/log -name "*.log" -size +10M -mtime -7

Executing commands on found files:

# Delete all .tmp files:
find /tmp -name "*.tmp" -delete

# Change permissions on all scripts:
find ~/scripts -name "*.sh" -exec chmod +x {} \;

# More efficient with xargs (see Section 6):
find . -name "*.txt" -print0 | xargs -0 wc -l

โš ๏ธ Warning: find with -delete or -exec rm is powerful and dangerous. Always test without the destructive action first:

# Test first:
find /tmp -name "*.tmp"
# If output looks right:
find /tmp -name "*.tmp" -delete

Pro tipโ€”excluding directories:

# Search but ignore node_modules:
find . -name "*.js" -not -path "*/node_modules/*"

Fast File Locating: locate

locate filename

Blazing fast filename search that works across your entire system. How? It searches a pre-built database instead of scanning the filesystem in real-time.

Advantages over find:

  • Incredibly fast (sub-second searches across millions of files)
  • Simple syntax

Disadvantages:

  • Database may be outdated (usually updated daily)
  • Only searches by filename (no size, time, or content filtering)

Updating the database:

sudo updatedb

Run this after creating or deleting many files if you need locate to find them immediately.

Case-insensitive search:

locate -i document.pdf

Limiting results:

locate -n 20 readme  # Show only first 20 matches

When to use locate vs. find:

  • Use locate when you vaguely remember a filename and need quick results
  • Use find when you need precise criteria (size, date, type) or the database might be stale

Finding Commands: apropos

apropos "search term"

Searches through man page descriptions to find relevant commands:

$ apropos "copy files"
cp (1)                   - copy files and directories
cpio (1)                 - copy files to and from archives
rsync (1)                - fast, versatile, remote file-copying tool

Use case: โ€œI need to do X, but I donโ€™t know which commandโ€ฆโ€ Just ask apropos.

Exact keyword match:

apropos -e networking

Comparing Files

Line-by-line comparison: diff

diff file1.txt file2.txt

Shows exactly what changed between two files:

3c3
< This is the old line
---
> This is the new line
7d6
< This line was deleted

Unified format (more readable):

diff -u file1.txt file2.txt

Side-by-side comparison:

diff -y file1.txt file2.txt

Comparing directories:

diff -r directory1/ directory2/

Practical use: Code reviews, configuration audits, troubleshooting changes.

Byte-by-byte comparison: cmp

cmp file1.bin file2.bin

Unlike diff (which compares text line-by-line), cmp compares files byte-by-byte. Essential for binary files like images, videos, or compiled programs.

Silent check (just the exit code):

cmp -s file1 file2 && echo "Files are identical"

Comparing sorted files: comm

comm file1.txt file2.txt

Requires both files to be sorted. Outputs three columns:

  1. Lines only in file1
  2. Lines only in file2
  3. Lines in both files

Suppress columns:

comm -12 file1.txt file2.txt  # Show only lines in both (intersection)
comm -23 file1.txt file2.txt  # Show only lines unique to file1
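
Because comm requires sorted input, bash process substitution is a convenient way to sort on the fly (the filenames are hypothetical):

comm -23 <(sort packages_a.txt) <(sort packages_b.txt)  # lines only in the first list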

6. Advanced Text Processing: Power Tools

These commands transform raw text into structured information. Theyโ€™re the secret sauce behind command-line productivity.

Stream Editor: sed

sed 's/old/new/' filename.txt

Stream Editor performs find-and-replace and other transformations as text flows through it.

Basic substitution:

# Replace first occurrence per line:
sed 's/cat/dog/' pets.txt

# Replace all occurrences (g for global):
sed 's/cat/dog/g' pets.txt

# Replace and save to new file:
sed 's/cat/dog/g' pets.txt > updated_pets.txt

# Edit file in-place:
sed -i 's/cat/dog/g' pets.txt

โš ๏ธ Warning: -i modifies the original file. Use -i.bak to create a backup:

sed -i.bak 's/cat/dog/g' pets.txt  # Creates pets.txt.bak

Delete lines:

# Delete line 5:
sed '5d' file.txt

# Delete lines 10-20:
sed '10,20d' file.txt

# Delete lines matching a pattern:
sed '/^#/d' script.sh  # Remove comment lines
sed '/^$/d' file.txt   # Remove blank lines

Print specific lines:

# Print line 42:
sed -n '42p' large_file.txt

# Print lines 10-20:
sed -n '10,20p' file.txt

Multiple operations:

sed -e 's/cat/dog/g' -e 's/red/blue/g' file.txt

Real-world exampleโ€”configuration file update:

# Change database host in config:
sed -i 's/DB_HOST=localhost/DB_HOST=db.example.com/g' config.env

Pattern Scanner: awk

awk '{print $1}' file.txt

AWK is a complete programming language designed for text processing. Its superpower: effortlessly handling column-based data.

Understanding AWKโ€™s model:

  • AWK processes text line-by-line
  • Each line is split into fields (columns)
  • $1 is the first field, $2 is the second, etc.
  • $0 is the entire line

Basic field extraction:

# Print first column:
ls -l | awk '{print $9}'  # Filenames only

# Print multiple columns:
ls -l | awk '{print $9, $5}'  # Filename and size

# Reorder columns:
echo "John Doe 30" | awk '{print $3, $1, $2}'
# Output: 30 John Doe

Custom field separators:

# Default separator is whitespace, but you can change it:
awk -F':' '{print $1}' /etc/passwd  # Print all usernames

# Using comma as separator:
awk -F',' '{print $2}' data.csv

Conditional processing:

# Print lines where column 3 is greater than 100:
awk '$3 > 100' data.txt

# Print lines matching a pattern:
awk '/ERROR/ {print $1, $4}' log.txt

# Combine conditions:
awk '$3 > 100 && $5 == "active"' data.txt

Mathematical operations:

# Sum all numbers in column 2:
awk '{sum += $2} END {print sum}' numbers.txt

# Average:
awk '{sum += $1; count++} END {print sum/count}' data.txt

# Count lines:
awk 'END {print NR}' file.txt  # NR = Number of Records (lines)

Real-world examples:

Analyze access logs:

# Count requests per IP:
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head

# Total bandwidth transferred (column 10 is bytes):
awk '{sum += $10} END {print sum/1024/1024 " MB"}' access.log

Parse CSV data:

# Extract email addresses from CSV:
awk -F',' '{print $3}' contacts.csv

# Filter high-value transactions:
awk -F',' '$4 > 1000 {print $1, $2, $4}' transactions.csv

Pro tip: AWK can replace many pipes:

# Instead of: cat file | grep pattern | awk '{print $2}'
# Just use:
awk '/pattern/ {print $2}' file

Simple Column Cutter: cut

cut -d',' -f1 data.csv

A simpler alternative to AWK for basic column extraction:

Extract specific fields:

# Field 1 (default delimiter is tab):
cut -f1 file.txt

# Fields 1 and 3:
cut -f1,3 file.txt

# Field range:
cut -f2-5 file.txt

# Custom delimiter:
cut -d':' -f1 /etc/passwd  # Extract usernames
cut -d',' -f2,4 data.csv   # Extract columns 2 and 4 from CSV

Character-based extraction:

# First 10 characters of each line:
cut -c1-10 file.txt

# Characters 5 through 15:
cut -c5-15 file.txt

# Everything from character 20 onward:
cut -c20- file.txt

When to use cut vs. awk:

  • Use cut for simple, single-delimiter column extraction
  • Use awk for complex conditions, calculations, or multiple delimiters

Sorting Lines: sort

sort filename.txt

Arranges lines alphabetically or numerically:

Basic sorting:

# Alphabetical (default):
sort names.txt

# Reverse order:
sort -r names.txt

# Numeric sort (critical for numbers):
sort -n numbers.txt

Why -n matters:

# Without -n (alphabetical):
echo -e "1\n10\n2\n20" | sort
# Output: 1, 10, 2, 20 (wrong!)

# With -n (numeric):
echo -e "1\n10\n2\n20" | sort -n
# Output: 1, 2, 10, 20 (correct!)

Sort by specific column:

# Sort by second column, numerically:
sort -k2 -n data.txt

# Sort by third column, reverse:
sort -k3 -r data.txt

# Multiple sort keys:
sort -k1,1 -k2n data.txt  # Sort by column 1, then by column 2 numerically

Advanced options:

# Ignore leading blanks:
sort -b file.txt

# Case-insensitive:
sort -f names.txt

# Human-readable numbers (understands K, M, G):
du -h * | sort -h

# Random shuffle:
sort -R file.txt

# Unique sort (remove duplicates while sorting):
sort -u file.txt

Real-world exampleโ€”find largest directories:

du -sh * | sort -h | tail -10

Remove Duplicate Lines: uniq

uniq file.txt

Removes adjacent duplicate linesโ€”this is crucial to understand.

โš ๏ธ Critical Pitfall: uniq only removes duplicates that are next to each other:

# This WON'T work as expected:
echo -e "apple\nbanana\napple" | uniq
# Output: apple, banana, apple (duplicate remains!)

# This WILL work:
echo -e "apple\nbanana\napple" | sort | uniq
# Output: apple, banana

Best practice: Always pipe through sort first:

sort file.txt | uniq

Count occurrences:

sort file.txt | uniq -c
# Output:
#   3 apple
#   1 banana
#   2 cherry

Show only duplicates:

sort file.txt | uniq -d

Show only unique lines (no duplicates):

sort file.txt | uniq -u

Real-world examples:

Count unique visitors in access log:

awk '{print $1}' access.log | sort | uniq | wc -l

Find most common error messages:

grep ERROR app.log | sort | uniq -c | sort -nr | head -10

Character Translation: tr

tr 'abc' 'xyz'

Translates or deletes charactersโ€”works on standard input only:

Character substitution:

# Convert lowercase to uppercase:
echo "hello world" | tr 'a-z' 'A-Z'
# Output: HELLO WORLD

# Convert uppercase to lowercase:
echo "HELLO WORLD" | tr 'A-Z' 'a-z'
# Output: hello world

# ROT13 encoding:
echo "Hello" | tr 'A-Za-z' 'N-ZA-Mn-za-m'

Delete characters:

# Remove all digits:
echo "Phone: 555-1234" | tr -d '0-9'
# Output: Phone: -

# Remove all spaces:
echo "too  many   spaces" | tr -d ' '
# Output: toomanyspaces

# Remove newlines:
cat multiline.txt | tr -d '\n'

Squeeze repeated characters:

# Collapse multiple spaces to single space:
echo "too    many     spaces" | tr -s ' '
# Output: too many spaces

# Remove duplicate letters:
echo "bookkeeper" | tr -s 'a-z'
# Output: bokeper

Complement (invert the set):

# Keep only alphanumeric characters:
echo "Hello, World! 123" | tr -cd 'A-Za-z0-9'
# Output: HelloWorld123

# Remove everything except newlines (one word per line):
cat file.txt | tr -cs 'A-Za-z' '\n'

Real-world uses:

Convert DOS line endings to Unix:

tr -d '\r' < dos_file.txt > unix_file.txt

Generate random passwords:

tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 20

Word, Line, and Byte Counting: wc

wc filename.txt

Word Count provides statistics about text:

Default output:

$ wc document.txt
  45  312 2048 document.txt
  โ”‚   โ”‚   โ”‚    โ””โ”€ filename
  โ”‚   โ”‚   โ””โ”€โ”€โ”€โ”€โ”€โ”€ bytes
  โ”‚   โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ words
  โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ lines

Specific counts:

wc -l file.txt  # Lines only (most common)
wc -w file.txt  # Words only
wc -c file.txt  # Bytes only
wc -m file.txt  # Characters (may differ from bytes with Unicode)
wc -L file.txt  # Length of longest line

Multiple files:

$ wc -l *.txt
  100 file1.txt
  200 file2.txt
  150 file3.txt
  450 total

Real-world examples:

Count files in directory:

ls | wc -l

Count lines of code in project:

find . -name "*.py" -exec cat {} \; | wc -l

Monitor log growth rate:

# Before:
wc -l app.log
# ... wait some time ...
# After:
wc -l app.log  # Compare the numbers

Count occurrences of a pattern:

grep -r "TODO" src/ | wc -l

Pipe Splitter: tee

command | tee output.txt

Splits a pipeline: sends output to both a file and the screen (or next command).

Basic usage:

# See output AND save it:
ls -la | tee file_list.txt

# Long-running commandโ€”monitor and save:
./build_script.sh | tee build.log

Append instead of overwrite:

echo "New entry" | tee -a log.txt

Multiple outputs:

echo "Important" | tee file1.txt file2.txt file3.txt

Combining with sudo:

# This WON'T work (sudo doesn't apply to redirection):
sudo echo "nameserver 8.8.8.8" > /etc/resolv.conf

# This WILL work:
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf

# Append with sudo:
echo "option timeout:1" | sudo tee -a /etc/resolv.conf

Real-world patternโ€”save and continue processing:

# Save intermediate results while continuing pipeline:
cat data.txt | tee raw_data.txt | grep "ERROR" | tee errors.txt | wc -l

Pro tipโ€”silent output:

# Save to file without screen output:
command | tee file.txt > /dev/null

Argument Builder: xargs

command1 | xargs command2

Converts input into arguments for another command. This solves a fundamental problem: many commands donโ€™t read from standard inputโ€”they need arguments.

The problem xargs solves:

# This doesn't work (rm doesn't read filenames from stdin):
find . -name "*.tmp" | rm

# This works:
find . -name "*.tmp" | xargs rm

Basic usage:

# Delete files returned by find:
find . -name "*.log" | xargs rm

# Create directories:
echo "dir1 dir2 dir3" | xargs mkdir

# Download multiple URLs:
cat urls.txt | xargs wget

Handling spaces and special characters:

# UNSAFE (breaks with spaces in filenames):
find . -name "*.txt" | xargs rm

# SAFE (use null delimiter):
find . -name "*.txt" -print0 | xargs -0 rm

The -print0 and -0 combination uses null bytes (\0) as delimiters instead of spaces, making it safe for filenames with spaces, quotes, or other special characters.

Control execution:

# Run command once per item (-n 1):
echo "file1 file2 file3" | xargs -n 1 echo "Processing:"
# Output:
# Processing: file1
# Processing: file2
# Processing: file3

# Parallel execution (-P):
find . -name "*.jpg" | xargs -P 4 -I {} convert {} {}.optimized.jpg
# Processes 4 images simultaneously

Interactive prompting:

# Confirm before each execution:
find . -name "*.tmp" | xargs -p rm
# Prompts: rm ./file1.tmp?...

Replace string:

# Use {} as placeholder:
find . -name "*.txt" | xargs -I {} cp {} {}.backup

# Custom placeholder:
cat hostnames.txt | xargs -I HOST ssh HOST "df -h"

Real-world examples:

Batch rename files:

ls *.jpeg | xargs -I {} bash -c 'mv {} $(echo {} | sed s/jpeg/jpg/)'

Check which servers are up:

cat servers.txt | xargs -I {} -P 10 ping -c 1 {}

Find and replace across multiple files:

grep -l "old_term" *.txt | xargs sed -i 's/old_term/new_term/g'

Compress large files in parallel:

find . -name "*.log" -size +100M -print0 | xargs -0 -P 4 gzip

7. Users, Permissions, and Access Control

Linux is a multi-user system with robust permission controls. Understanding these concepts is essential for both security and day-to-day operations.

Identifying Yourself

whoami

Shows your current username:

$ whoami
akib

When it matters: After using su to switch users, or in scripts where you need to check whoโ€™s running the code.

Detailed User Information

id

Displays your user ID (UID), group ID (GID), and all group memberships:

$ id
uid=1000(akib) gid=1000(akib) groups=1000(akib),27(sudo),998(docker)

What this tells you:

  • uid=1000(akib): Your user ID is 1000, username is โ€œakibโ€
  • gid=1000(akib): Your primary group ID is 1000, group name is โ€œakibโ€
  • groups=...: Youโ€™re also in the โ€œsudoโ€ and โ€œdockerโ€ groups

Why it matters: Group membership determines what you can access. Being in the โ€œsudoโ€ group means you can run admin commands. Being in the โ€œdockerโ€ group means you can run Docker containers without sudo.

Check another user:

id username

List Group Memberships

groups

Simpler than idโ€”just lists group names:

$ groups
akib sudo docker www-data

Check another userโ€™s groups:

groups username

Execute as Administrator: sudo

sudo command

Superuser Do lets you run individual commands with root privileges:

# Install software:
sudo apt install nginx

# Edit system files:
sudo nano /etc/hosts

# Restart services:
sudo systemctl restart apache2

# View protected files:
sudo cat /var/log/auth.log

How it works:

  1. You enter your password (not rootโ€™s password)
  2. System checks if youโ€™re in the sudo group
  3. Command runs with root privileges
  4. Your password is cached for ~15 minutes

Running multiple commands:

# Start a root shell:
sudo -i  # Login shell (loads root's environment)
sudo -s  # Shell (preserves your environment)

# Run specific shell as root:
sudo bash

Run as different user:

sudo -u username command

Preserve environment variables:

sudo -E command  # Keeps your environment

Best practices:

  • Only use sudo when necessary
  • Never run untrusted scripts with sudo
  • Review what a command does before adding sudo
  • Use sudo -i for multiple admin tasks, then exit when done

โš ๏ธ Security Warning: The phrase โ€œwith great power comes great responsibilityโ€ was practically invented for sudo. One mistyped command can destroy your system.

Switch Users: su

su username

Substitute User switches your entire session to another account:

# Become root:
su
# or
su root

# Become another user:
su - john  # The dash loads john's environment

Difference from sudo:

  • su requires the target userโ€™s password
  • sudo requires your password
  • su switches your entire session
  • sudo runs one command

Why sudo is preferred:

  • More auditable (logs show who did what)
  • More granular (can limit what commands users can run)
  • Doesnโ€™t require sharing the root password
  • Automatically times out

Return to original user:

exit

Understanding File Permissions

Every file and directory has permissions that control who can read, write, or execute it.

Viewing permissions:

$ ls -l script.sh
-rwxr-xr-- 1 akib developers 2048 Oct 24 10:30 script.sh
│└┬┘└┬┘└┬┘ └─ number of hard links
│ │  │  └─── Others: r-- (read only)
│ │  └────── Group: r-x (read and execute)
│ └───────── Owner: rwx (read, write, execute)
└─────────── File type: - (regular file), d (directory), l (link)

Permission breakdown:

  • r (read): View file contents / List directory contents
  • w (write): Modify file / Create/delete files in directory
  • x (execute): Run file as program / Enter directory

Three permission sets:

  1. Owner (user who created the file)
  2. Group (users in the fileโ€™s group)
  3. Others (everyone else)
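
One behavior that surprises newcomers: on a directory, x controls whether you can enter it at all. A quick demonstration:

mkdir demo && touch demo/file
chmod 600 demo   # remove execute permission from the directory
cd demo          # bash: cd: demo: Permission denied
chmod 700 demo   # restore access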

Changing Permissions: chmod

chmod permissions file

Symbolic method (human-readable):

# Add execute permission for owner:
chmod u+x script.sh

# Remove write permission for others:
chmod o-w document.txt

# Add read permission for group:
chmod g+r data.txt

# Set exact permissions:
chmod u=rwx,g=rx,o=r file.txt

# Multiple changes:
chmod u+x,g+x,o-w script.sh

Symbols:

  • u = user (owner)
  • g = group
  • o = others
  • a = all (user, group, and others)

Operators:

  • + = add permission
  • - = remove permission
  • = = set exact permission

Octal method (numeric):

Each permission set is represented by a three-digit octal number:

r = 4
w = 2
x = 1

Add them up:

  • 7 (4+2+1) = rwx
  • 6 (4+2) = rw-
  • 5 (4+1) = r-x
  • 4 = r--
  • 0 = --- (no permissions)

Common patterns:

# rwxr-xr-x (755): Owner full, others read/execute
chmod 755 script.sh

# rw-r--r-- (644): Owner read/write, others read-only
chmod 644 document.txt

# rwx------ (700): Only owner can access
chmod 700 private_script.sh

# rw-rw-r-- (664): Owner and group can edit, others read
chmod 664 shared_doc.txt

Recursive (apply to all files in directory):

chmod -R 755 /var/www/html/

Real-world examples:

Make script executable:

chmod +x deploy.sh
./deploy.sh  # Now you can run it

Secure SSH keys:

chmod 600 ~/.ssh/id_rsa  # Private keys must be owner-only
chmod 644 ~/.ssh/id_rsa.pub  # Public keys can be readable

Fix web server permissions:

# Directories: 755 (browsable)
find /var/www -type d -exec chmod 755 {} \;
# Files: 644 (readable)
find /var/www -type f -exec chmod 644 {} \;

Changing Ownership: chown

chown owner:group file

Changes who owns a file:

# Change owner only:
sudo chown john file.txt

# Change owner and group:
sudo chown john:developers file.txt

# Change group only:
sudo chown :developers file.txt
# or use chgrp:
sudo chgrp developers file.txt

# Recursive:
sudo chown -R www-data:www-data /var/www/html/

Why you need sudo: Only root can change file ownership (security feature).

Real-world use case: After extracting files as root, change ownership to regular user:

sudo tar -xzf archive.tar.gz
sudo chown -R $USER:$USER extracted_folder/

Fix web application permissions:

# Web server needs to own web files:
sudo chown -R www-data:www-data /var/www/myapp/

# But you need to edit them:
sudo usermod -aG www-data $USER  # Add yourself to www-data group

Changing Your Password

passwd

Prompts you to change your password:

$ passwd
Changing password for akib.
Current password:
New password:
Retype new password:
passwd: password updated successfully

Change another userโ€™s password (as root):

sudo passwd username

Password requirements:

  • Usually minimum 8 characters
  • Mix of letters, numbers, symbols
  • Not based on dictionary words
  • Different from previous passwords

Best practices:

  • Use a password manager
  • Use strong, unique passwords for each system
  • Enable two-factor authentication when available
  • Change passwords periodically, especially after security incidents

8. Process and System Management

Understanding and controlling what your system is doing.

Viewing Processes: ps

ps

Process Status shows currently running processes:

Basic output:

$ ps
  PID TTY          TIME CMD
 1234 pts/0    00:00:00 bash
 5678 pts/0    00:00:00 ps

Show all processes:

ps aux  # BSD style (no dash)
ps -ef  # Unix style (with dash)

Both show similar informationโ€”choose whichever you prefer.

Understanding ps aux output:

USER  PID %CPU %MEM    VSZ   RSS TTY   STAT START   TIME COMMAND
akib  1234  0.5  2.1 123456 12345 pts/0 S    10:30   0:05 python app.py
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ”‚    โ”‚    โ”‚    โ”‚       โ”‚    โ””โ”€ command
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ”‚    โ”‚    โ”‚    โ”‚       โ””โ”€โ”€โ”€โ”€โ”€โ”€ CPU time used
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ”‚    โ”‚    โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ start time
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ”‚    โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ state
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ terminal
โ”‚     โ”‚     โ”‚    โ”‚     โ”‚      โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ resident memory (KB)
โ”‚     โ”‚     โ”‚    โ”‚     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ virtual memory (KB)
โ”‚     โ”‚     โ”‚    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ % of RAM
โ”‚     โ”‚     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ % of CPU
โ”‚     โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ process ID
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ user

Process states:

  • R: Running
  • S: Sleeping (waiting for an event)
  • D: Uninterruptible sleep (usually I/O)
  • Z: Zombie (finished but not cleaned up)
  • T: Stopped (paused)
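
To spot zombies quickly, filter on the STAT column (the eighth field of ps aux output):

ps aux | awk '$8 ~ /^Z/ {print $2, $11}'   # PID and command of each zombie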

Find specific processes:

ps aux | grep python
ps aux | grep -i apache

Show process tree (parent-child relationships):

ps auxf  # Forest view
pstree   # Dedicated tree view

Sort by CPU usage:

ps aux --sort=-%cpu | head

Sort by memory usage:

ps aux --sort=-%mem | head

Real-Time Process Monitoring: top and htop

top

Interactive, real-time view of system processes:

Essential top commands:

  • q: Quit
  • k: Kill a process (prompts for PID)
  • M: Sort by memory usage
  • P: Sort by CPU usage
  • 1: Show individual CPU cores
  • h: Help
  • u: Filter by username
  • Spacebar: Refresh immediately

Understanding the top display:

top - 14:32:01 up 5 days, 2:17, 3 users, load average: 0.45, 0.62, 0.58
Tasks: 187 total, 1 running, 186 sleeping, 0 stopped, 0 zombie
%Cpu(s): 12.3 us, 3.1 sy, 0.0 ni, 84.1 id, 0.5 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem:  15842.5 total, 2341.2 free, 8234.7 used, 5266.6 buff/cache
MiB Swap:  2048.0 total, 2048.0 free, 0.0 used. 6892.4 avail Mem

Load average explained:

  • Three numbers: 1-minute, 5-minute, 15-minute averages
  • Represents number of processes waiting for CPU time
  • On a 4-core system, load of 4.0 means fully utilized
  • Load > number of cores = system is overloaded
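
To interpret load without opening top, compare the averages against your core count:

uptime   # prints uptime plus the three load averages
nproc    # prints the number of CPU cores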

Better alternative: htop

htop

A more user-friendly version with:

  • Color-coded display
  • Mouse support
  • Easier process killing
  • Tree view by default
  • Better visual representation of CPU and memory

Install htop:

sudo apt install htop  # Debian/Ubuntu
sudo yum install htop  # Red Hat/CentOS

โš ๏ธ Common Mistake: Panicking when you see high CPU usage in top. Check if itโ€™s legitimate activity before killing processes.

Terminating Processes: kill

kill PID

Sends signals to processesโ€”usually to terminate them:

Basic usage:

# Graceful termination (SIGTERM):
kill 1234

# Force kill (SIGKILL):
kill -9 1234

Signal types:

  • SIGTERM (15, default): โ€œPlease terminate gracefullyโ€
    • Allows process to clean up (save files, close connections)
    • Can be ignored by the process
  • SIGKILL (9): โ€œDie immediatelyโ€
    • Cannot be ignored or caught
    • No cleanupโ€”data loss possible
    • Use as last resort

Other useful signals:

kill -HUP 1234   # Hang up (often makes daemons reload config)
kill -STOP 1234  # Pause process
kill -CONT 1234  # Resume paused process

Kill by name:

killall process_name  # Kill all processes with this name
pkill pattern         # Kill processes matching pattern

Examples:

# Kill all Python processes:
killall python3

# Kill all processes owned by user:
pkill -u username

# Kill frozen Firefox:
killall -9 firefox

Finding the PID:

# Method 1:
ps aux | grep program_name

# Method 2:
pgrep program_name

# Method 3:
pidof program_name

โš ๏ธ Warning: Always try regular kill before kill -9. Forcing termination can lead to:

  • Lost unsaved work
  • Corrupted files
  • Orphaned processes
  • Resource leaks
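
Following that advice, a common escalation pattern is to send SIGTERM, wait briefly, and force only if the process is still alive (the PID is hypothetical):

kill 1234                                   # polite request to terminate
sleep 5
kill -0 1234 2>/dev/null && kill -9 1234    # still running? force it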

Job Control: bg, fg, jobs

When you start a program from the terminal, itโ€™s a โ€œforeground jobโ€ that takes over your prompt. Job control lets you manage multiple programs.

Suspend current job: Press Ctrl+Z to pause the foreground job:

$ python long_script.py
^Z
[1]+  Stopped                 python long_script.py

List jobs:

$ jobs
[1]+  Stopped                 python long_script.py
[2]-  Running                 npm start &

Resume in foreground:

fg %1  # Resume job 1 in foreground

Resume in background:

bg %1  # Job 1 continues running, but you get your prompt back

Start job in background immediately:

long_running_command &  # Ampersand runs it in background

Real-world workflow:

# Start editing a file:
vim document.txt

# Realize you need to check something:
# Press Ctrl+Z to suspend vim

# Run other commands:
ls -la
cat other_file.txt

# Go back to editing:
fg

# Note: bg is most useful for non-interactive jobs (builds, downloads);
# an interactive program like vim just stops again when it tries to
# read from the terminal

โš ๏ธ Limitation: Background jobs still output to the terminal. For true detachment, use nohup or terminal multiplexers.

Run After Logout: nohup

nohup command &

No Hang Up makes a process immune to logoutโ€”essential for long-running tasks on remote servers:

# Start a long backup:
nohup ./backup_script.sh &

# Start a development server:
nohup npm start &

# Output goes to nohup.out by default:
tail -f nohup.out

Redirect output:

nohup ./script.sh > output.log 2>&1 &

Explanation:

  • nohup: Ignore hangup signals
  • > output.log: Redirect stdout
  • 2>&1: Redirect stderr to same place as stdout
  • &: Run in background

Check if itโ€™s running:

ps aux | grep script.sh

Better alternative for remote work: Use tmux or screen (see Advanced Techniques section).

Disk Space: df

df -h

Disk Free shows available disk space per filesystem:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   35G   13G  74% /
/dev/sdb1       500G  350G  125G  74% /home
tmpfs           7.8G  1.2M  7.8G   1% /dev/shm

What it shows:

  • Filesystem: Device or partition
  • Size: Total capacity
  • Used: Space consumed
  • Avail: Space remaining
  • Use%: Percentage full
  • Mounted on: Where itโ€™s accessible in the directory tree

โš ๏ธ Warning: When a disk hits 100%, things break:

  • Canโ€™t save files
  • Logs canโ€™t write (applications fail)
  • System becomes unstable

Quick checks:

df -h /          # Check root partition
df -h /home      # Check home partition
df -h --total    # Show grand total

Find largest filesystems:

df -h | sort -h -k3  # Sort by usage

Directory Sizes: du

du -sh directory/

Disk Usage shows how much space files and directories consume:

# Summary of directory:
du -sh ~/Downloads/
# Output: 2.3G    /home/akib/Downloads/

# Summarize each subdirectory:
du -sh ~/Documents/*
# Output:
# 150M    /home/akib/Documents/Work
# 3.2G    /home/akib/Documents/Projects
# 45M     /home/akib/Documents/Personal

# Show all files and directories (recursive):
du -h ~/Projects/

Options:

  • -s: Summary (donโ€™t show subdirectories)
  • -h: Human-readable sizes
  • -c: Show grand total
  • --max-depth=N: Limit recursion depth

Find disk hogs:

# Top 10 largest directories:
du -sh /* | sort -h | tail -10

# Or more accurate:
du -h --max-depth=1 / | sort -h | tail -10

Find large files:

find / -type f -size +100M -exec du -h {} \; | sort -h

Real-world troubleshooting:

# "Disk full" alertโ€”find the culprit:
du -sh /* | sort -h | tail -5
# Drill down into the largest directory:
du -sh /var/* | sort -h | tail -5
# Continue until you find the problem:
du -sh /var/log/* | sort -h | tail -5

System Information: uname

uname -a

Shows kernel and system information:

$ uname -a
Linux myserver 5.15.0-56-generic #62-Ubuntu SMP Thu Nov 24 13:31:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Individual components:

uname -s  # Kernel name: Linux
uname -n  # Network name: myserver
uname -r  # Kernel release: 5.15.0-56-generic
uname -v  # Kernel version: #62-Ubuntu SMP Thu Nov 24...
uname -m  # Machine hardware: x86_64
uname -o  # Operating system: GNU/Linux

Practical use:

# Check if you're on 64-bit:
uname -m
# x86_64 = 64-bit, i686 = 32-bit

# Get kernel version for bug reports:
uname -r

Hostname

hostname

Shows or sets the systemโ€™s network name:

$ hostname
myserver.example.com

# Show just the short name:
$ hostname -s
myserver

# Show IP addresses:
$ hostname -I
192.168.1.100 10.0.0.50

Change hostname (temporary):

sudo hostname newname

Change hostname (permanent):

# Ubuntu/Debian:
sudo hostnamectl set-hostname newname

# Older systems:
sudo nano /etc/hostname  # Edit file
sudo nano /etc/hosts     # Update 127.0.1.1 entry

System Shutdown and Reboot

reboot
shutdown

Control system power state (requires sudo):

Reboot immediately:

sudo reboot

Shutdown immediately:

sudo shutdown -h now

Shutdown with delay:

sudo shutdown -h +10  # Shutdown in 10 minutes
sudo shutdown -h 23:00  # Shutdown at 11 PM

Reboot with delay:

sudo shutdown -r +5  # Reboot in 5 minutes

Cancel scheduled shutdown:

sudo shutdown -c

Broadcast message to users:

sudo shutdown -h +10 "System maintenance in 10 minutes"

Alternative commands:

sudo poweroff  # Immediate shutdown
sudo halt      # Stop the system (older method)
sudo init 0    # Shutdown (runlevel 0)
sudo init 6    # Reboot (runlevel 6)

9. Networking Essentials

Testing Connectivity: ping

ping hostname

Checks if you can reach a remote host:

$ ping google.com
PING google.com (142.250.185.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.185.46): icmp_seq=1 ttl=117 time=12.3 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.185.46): icmp_seq=2 ttl=117 time=11.8 ms

Understanding output:

  • 64 bytes: Packet size
  • icmp_seq: Packet sequence number
  • ttl: Time To Live (hops remaining)
  • time: Round-trip latency in milliseconds

Stop pinging: Press Ctrl+C to stop. Youโ€™ll see statistics:

--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 11.532/12.015/12.847/0.518 ms

Useful options:

# Send specific number of pings:
ping -c 4 google.com

# Set interval (1 second default):
ping -i 0.5 example.com  # Ping every 0.5 seconds

# Flood ping (requires root):
sudo ping -f 192.168.1.1  # As fast as possible (testing)

# Set packet size:
ping -s 1000 example.com  # 1000-byte packets

Troubleshooting scenarios:

No response:

$ ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50) 56(84) bytes of data.
^C
--- 192.168.1.50 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss

Causes: Host down, network unreachable, firewall blocking ICMP

High latency:

time=523 ms  # Should be <50ms for LAN, <100ms for internet

Causes: Network congestion, bad connection, routing issues

Packet loss:

10 packets transmitted, 7 received, 30% packet loss

Causes: Weak WiFi, network congestion, failing hardware

Remote Access: ssh

ssh user@hostname

Secure Shell connects you to remote Linux systems securely:

Basic connection:

ssh akib@192.168.1.100
ssh admin@server.example.com

Custom port:

ssh -p 2222 user@hostname

Execute single command:

ssh user@server "df -h"
ssh user@server "systemctl status nginx"

X11 forwarding (run GUI apps remotely):

ssh -X user@server
# Then run GUI programsโ€”they display on your local screen

Verbose output (troubleshooting):

ssh -v user@server   # Verbose
ssh -vvv user@server # Very verbose

SSH config file (~/.ssh/config): Make connections easier:

Host myserver
    HostName server.example.com
    User akib
    Port 22
    IdentityFile ~/.ssh/id_rsa

Host prod
    HostName 203.0.113.50
    User admin
    Port 2222

Now just type:

ssh myserver
ssh prod

Key-based authentication (covered in Advanced Techniques):

  • More secure than passwords
  • No password typing required
  • Essential for automation

โš ๏ธ Security Best Practices:

  • Never use root account directly (use sudo instead)
  • Disable password authentication (use keys only)
  • Use non-standard ports
  • Enable fail2ban to block brute-force attacks
  • Keep SSH updated

File Synchronization: rsync

rsync source destination

Remote Sync is the Swiss Army knife of file copyingโ€”efficient, powerful, and network-aware:

Basic local copy:

rsync -av source/ destination/

Essential options:

  • -a: Archive mode (preserves permissions, timestamps, symbolic links)
  • -v: Verbose (show files being transferred)
  • -z: Compress during transfer
  • -h: Human-readable sizes
  • -P: Show Progress + keep partial files

Best practice combination:

rsync -avzP source/ destination/

Remote copying:

# Upload to remote server:
rsync -avz /local/path/ user@server:/remote/path/

# Download from remote server:
rsync -avz user@server:/remote/path/ /local/path/

Important trailing slash behavior:

# With trailing slashโ€”copy CONTENTS:
rsync -av source/ destination/
# Result: destination contains files from source

# Without trailing slashโ€”copy DIRECTORY:
rsync -av source destination/
# Result: destination/source/ contains the files

Delete files in destination not in source:

rsync -av --delete source/ destination/

Dry run (preview what would happen):

rsync -avn --delete source/ destination/
# -n = dry run (no changes made)

Exclude files:

# Exclude pattern:
rsync -av --exclude '*.tmp' source/ dest/

# Multiple excludes:
rsync -av --exclude '*.log' --exclude 'node_modules/' source/ dest/

# Exclude file list:
rsync -av --exclude-from='exclude-list.txt' source/ dest/

Resume interrupted transfers:

rsync -avP source/ dest/  # -P enables partial file resumption

Real-world examples:

Backup entire home directory:

rsync -avzP --delete ~/  /mnt/backup/home/

Mirror website to remote server:

rsync -avz --delete /var/www/html/ user@webserver:/var/www/html/

Sync with bandwidth limit:

rsync -avz --bwlimit=1000 large-files/ user@server:/path/
# Limit to 1000 KB/s

Why rsync beats scp:

  • Only transfers changed parts of files (delta transfer)
  • Can resume interrupted transfers
  • More options for filtering and control
  • Better for large transfers or slow connections

Network Information: ip

ip addr show

Modern tool for viewing and configuring network interfaces (replaces older ifconfig):

Show all network interfaces:

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
    inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0

Abbreviated versions:

ip a         # Short for 'ip addr show'
ip addr      # Same thing
ip link show # Show link-layer information
ip link      # Abbreviated

Show specific interface:

ip addr show eth0
ip addr show wlan0

Show routing table:

ip route show
# or
ip r

Show statistics:

ip -s link  # Interface statistics (packets, errors)

Common tasks:

Add IP address (temporary):

sudo ip addr add 192.168.1.50/24 dev eth0

Remove IP address:

sudo ip addr del 192.168.1.50/24 dev eth0

Bring interface up/down:

sudo ip link set eth0 up
sudo ip link set eth0 down

โš ๏ธ Note: Changes with ip are temporaryโ€”theyโ€™re lost on reboot. Permanent changes require editing network configuration files (location varies by distribution).

Downloading Files: wget and curl

Both download files from the web, but with different philosophies:

wget: The Downloader

wget URL

Designed specifically for downloading files:

Basic download:

wget https://example.com/file.zip

Save with custom name:

wget -O custom_name.zip https://example.com/file.zip

Resume interrupted download:

wget -c https://example.com/large_file.iso

Download multiple files:

wget -i urls.txt  # File containing list of URLs

Background download:

wget -b https://example.com/file.zip
tail -f wget-log  # Monitor progress

Recursive download (mirror site):

wget -r -np -k https://example.com/docs/
# -r = recursive
# -np = no parent (don't go up in directory structure)
# -k = convert links for local viewing

Limit download speed:

wget --limit-rate=200k https://example.com/file.zip

Authentication:

wget --user=username --password=pass https://example.com/file.zip

curl: The Swiss Army Knife

curl URL

More versatileโ€”can handle uploads, APIs, and complex protocols:

Basic download (outputs to stdout):

curl https://example.com/file.txt

Save to file:

curl -o filename.txt https://example.com/file.txt
# or preserve remote filename:
curl -O https://example.com/file.txt

Follow redirects:

curl -L https://example.com/redirect

Show progress:

curl -# -O https://example.com/file.zip  # Progress bar

API requests:

# GET request:
curl https://api.example.com/users

# POST request with data:
curl -X POST -d "name=John&email=john@example.com" https://api.example.com/users

# JSON POST:
curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"John","email":"john@example.com"}' \
     https://api.example.com/users

# With authentication:
curl -u username:password https://api.example.com/data

Headers:

# Show response headers:
curl -i https://example.com

# Show only headers:
curl -I https://example.com

# Custom headers:
curl -H "Authorization: Bearer TOKEN" https://api.example.com/data

Upload files:

curl -F "file=@document.pdf" https://example.com/upload

When to use which:

  • wget: Downloading files, mirroring websites, resume capability
  • curl: API testing, complex requests, headers, uploads

10. Archives and Compression

The Tape Archive: tar

tar options archive.tar files

Originally designed for Tape Archives, tar bundles multiple files into a single file (without compression):

Essential operations:

Create archive:

tar -cvf archive.tar file1 file2 directory/
# -c = create
# -v = verbose
# -f = filename

Extract archive:

tar -xvf archive.tar
# -x = extract

List contents:

tar -tvf archive.tar
# -t = list

Compressed archives:

Most tar archives are also compressed. The flag indicates compression type:

Gzip (.tar.gz or .tgz):

# Create:
tar -czvf archive.tar.gz directory/
# -z = gzip compression

# Extract:
tar -xzvf archive.tar.gz

# Extract to specific directory:
tar -xzvf archive.tar.gz -C /target/directory/

Bzip2 (.tar.bz2):

# Create (better compression, slower):
tar -cjvf archive.tar.bz2 directory/
# -j = bzip2 compression

# Extract:
tar -xjvf archive.tar.bz2

XZ (.tar.xz):

# Create (best compression, slowest):
tar -cJvf archive.tar.xz directory/
# -J = xz compression

# Extract:
tar -xJvf archive.tar.xz

Advanced options:

Exclude files:

tar -czvf backup.tar.gz --exclude='*.tmp' --exclude='node_modules' ~/project/

Extract specific files:

tar -xzvf archive.tar.gz path/to/specific/file

Preserve permissions:

tar -cpzvf archive.tar.gz directory/
# -p = preserve permissions

Append to existing archive:

tar -rvf archive.tar newfile.txt
# -r = append

Update archive (only newer files):

tar -uvf archive.tar directory/
# -u = update

Mnemonic for remembering flags:

  • Create: Create Zipped File โ†’ -czf
  • Extract: eXtract Zipped File โ†’ -xzf
  • List: Table of Verbose Files โ†’ -tvf

Real-world examples:

Backup home directory:

tar -czvf home-backup-$(date +%Y%m%d).tar.gz ~/

Backup with progress indicator:

tar -czvf backup.tar.gz directory/ --checkpoint=1000 --checkpoint-action=dot

Remote backup over SSH:

tar -czvf - directory/ | ssh user@server "cat > backup.tar.gz"

Extract while preserving everything:

sudo tar -xzvpf backup.tar.gz -C /
# -p = preserve permissions
# -C / = extract to root

Compression Tools

gzip/gunzip

gzip file.txt  # Compresses to file.txt.gz (deletes original)
gunzip file.txt.gz  # Decompresses (deletes .gz)

Keep original:

gzip -k file.txt
gunzip -k file.txt.gz

Compression levels:

gzip -1 file.txt  # Fastest, least compression
gzip -9 file.txt  # Slowest, best compression

View compressed file without extracting:

zcat file.txt.gz    # View contents
zless file.txt.gz   # View with pager
zgrep pattern file.txt.gz  # Search compressed file

bzip2/bunzip2

bzip2 file.txt  # Better compression than gzip
bunzip2 file.txt.bz2

Similar options to gzip (-k to keep, -1 to -9 for levels).

View compressed:

bzcat file.txt.bz2
bzless file.txt.bz2

zip/unzip

zip archive.zip file1 file2 directory/
unzip archive.zip

ZIP format (compatible with Windows):

Create archive:

# Files:
zip archive.zip file1.txt file2.txt

# Directories (recursive):
zip -r archive.zip directory/

# With compression level:
zip -9 -r archive.zip directory/  # Maximum compression

Extract archive:

# Current directory:
unzip archive.zip

# Specific directory:
unzip archive.zip -d /target/directory/

# List contents without extracting:
unzip -l archive.zip

# Extract specific file:
unzip archive.zip path/to/file.txt

Update existing archive:

zip -u archive.zip newfile.txt

Delete from archive:

zip -d archive.zip file-to-remove.txt

Password protection:

zip -e -r secure.zip directory/  # Prompts for password
unzip secure.zip  # Prompts for password

11. Bash Scripting: Automating Tasks

Bash isnโ€™t just an interactive shellโ€”itโ€™s a complete programming language for automation.

Script Basics

Create a script:

#!/bin/bash
# This is a comment

echo "Hello, World!"

Make it executable:

chmod +x script.sh

Run it:

./script.sh

The shebang: #!/bin/bash tells the system which interpreter to use.
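A more portable variant resolves bash through PATH, which helps on systems where bash isn't at /bin/bash (NixOS is a notable example):

#!/usr/bin/env bash
# env looks up bash in $PATH instead of hard-coding its location
echo "Hello from $(command -v bash)"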

Variables

Assignment:

name="John"
count=42
path="/home/user"

โš ๏ธ Critical: No spaces around =

name="John"   # Correct
name = "John" # Wrong! This runs 'name' as a command

Using variables:

echo "Hello, $name"
echo "Count is: $count"
echo "Path: ${path}/documents"  # Curly braces when needed

Command substitution:

current_date=$(date +%Y-%m-%d)
file_count=$(ls | wc -l)
user=$(whoami)

echo "Today is $current_date"
echo "You are $user"

Reading user input:

echo "Enter your name:"
read name
echo "Hello, $name!"

# Read with prompt:
read -p "Enter your age: " age

# Silent input (passwords):
read -sp "Enter password: " password

Environment variables:

echo $HOME     # /home/username
echo $USER     # username
echo $PATH     # Executable search path
echo $PWD      # Present working directory
echo $SHELL    # Current shell

Special Parameters

$0  # Script name
$1  # First argument
$2  # Second argument
$9  # Ninth argument
${10}  # Tenth argument (braces required for >9)

$@  # All arguments as separate strings
$*  # All arguments as single string
$#  # Number of arguments
$$  # Current process ID
$?  # Exit code of last command

Example script:

#!/bin/bash
echo "Script name: $0"
echo "First argument: $1"
echo "All arguments: $@"
echo "Number of arguments: $#"

Usage:

$ ./script.sh apple banana cherry
Script name: ./script.sh
First argument: apple
All arguments: apple banana cherry
Number of arguments: 3

Exit Codes

Every command returns an exit code:

  • 0 = Success
  • Non-zero = Error

# Check last command's exit code:
ls /existing/directory
echo $?  # Output: 0

ls /nonexistent/directory
echo $?  # Output: 2 (error code)

Using in scripts:

#!/bin/bash

if cp source.txt dest.txt; then
    echo "Copy successful"
else
    echo "Copy failed"
    exit 1  # Exit script with error code
fi

String Manipulation

text="Hello World"

# Length:
echo ${#text}  # 11

# Substring (position:length):
echo ${text:0:5}  # Hello
echo ${text:6}    # World

# Replace first occurrence:
echo ${text/World/Universe}  # Hello Universe

# Replace all occurrences:
fruit="apple apple apple"
echo ${fruit//apple/orange}  # orange orange orange

# Remove prefix:
path="/home/user/document.txt"
echo ${path#*/}  # home/user/document.txt (shortest match)
echo ${path##*/}  # document.txt (longest match - basename)

# Remove suffix:
file="document.txt.backup"
echo ${file%.*}  # document.txt (shortest match)
echo ${file%%.*}  # document (longest match)

# Uppercase/Lowercase:
text="Hello"
echo ${text^^}  # HELLO
echo ${text,,}  # hello
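A common use of suffix removal is batch-renaming file extensions, for example:

# Rename every *.jpeg in the current directory to *.jpg
for f in *.jpeg; do
    mv -- "$f" "${f%.jpeg}.jpg"   # strip the .jpeg suffix, append .jpg
done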

Conditional Statements

if [[ condition ]]; then
    # commands
elif [[ another_condition ]]; then
    # commands
else
    # commands
fi

File tests:

if [[ -e "/path/to/file" ]]; then
    echo "File exists"
fi

if [[ -f "document.txt" ]]; then
    echo "It's a regular file"
fi

if [[ -d "/home/user" ]]; then
    echo "It's a directory"
fi

if [[ -r "file.txt" ]]; then
    echo "File is readable"
fi

if [[ -w "file.txt" ]]; then
    echo "File is writable"
fi

if [[ -x "script.sh" ]]; then
    echo "File is executable"
fi

if [[ -s "file.txt" ]]; then
    echo "File is not empty"
fi

String comparisons:

if [[ "$USER" == "akib" ]]; then
    echo "Welcome, Akib"
fi

if [[ "$name" != "admin" ]]; then
    echo "Not admin"
fi

if [[ -z "$variable" ]]; then
    echo "Variable is empty"
fi

if [[ -n "$variable" ]]; then
    echo "Variable is not empty"
fi

Numeric comparisons:

if [[ $count -eq 10 ]]; then
    echo "Count is 10"
fi

if [[ $age -gt 18 ]]; then
    echo "Adult"
fi

if [[ $num -lt 100 ]]; then
    echo "Less than 100"
fi

if [[ $value -ge 50 ]]; then
    echo "50 or more"
fi

if [[ $score -le 100 ]]; then
    echo "100 or less"
fi

if [[ $result -ne 0 ]]; then
    echo "Non-zero result"
fi

Logical operators:

# AND:
if [[ $age -gt 18 && $age -lt 65 ]]; then
    echo "Working age"
fi

# OR:
if [[ "$user" == "admin" || "$user" == "root" ]]; then
    echo "Privileged user"
fi

# NOT:
if [[ ! -f "config.txt" ]]; then
    echo "Config file missing"
fi

Loops

For Loop

# Iterate over list:
for item in apple banana cherry; do
    echo "Fruit: $item"
done

# Iterate over files:
for file in *.txt; do
    echo "Processing $file"
    # Do something with $file
done

# Iterate over command output:
for user in $(cat users.txt); do
    echo "Creating account for $user"
done

# C-style loop:
for ((i=1; i<=10; i++)); do
    echo "Number: $i"
done

# Range:
for i in {1..10}; do
    echo $i
done

# Range with step:
for i in {0..100..10}; do
    echo $i  # 0, 10, 20, ..., 100
done

While Loop

# Basic while:
count=1
while [[ $count -le 5 ]]; do
    echo "Count: $count"
    ((count++))
done

# Read file line by line:
while read -r line; do
    echo "Line: $line"
done < input.txt

# Infinite loop:
while true; do
    echo "Running..."
    sleep 1
done

# Until loop (opposite of while):
count=1
until [[ $count -gt 5 ]]; do
    echo "Count: $count"
    ((count++))
done

Functions

# Define function:
function greet() {
    echo "Hello, $1!"
}

# Or without 'function' keyword:
greet() {
    echo "Hello, $1!"
}

# Call function:
greet "World"  # Output: Hello, World!

# With return value:
add() {
    local result=$(($1 + $2))
    echo $result
}

sum=$(add 5 3)
echo "Sum: $sum"  # Sum: 8

# With explicit return code:
check_file() {
    if [[ -f "$1" ]]; then
        return 0  # Success
    else
        return 1  # Failure
    fi
}

if check_file "document.txt"; then
    echo "File exists"
fi

Arrays

# Create array:
fruits=("apple" "banana" "cherry")

# Access elements:
echo ${fruits[0]}  # apple
echo ${fruits[1]}  # banana

# All elements:
echo ${fruits[@]}  # apple banana cherry

# Array length:
echo ${#fruits[@]}  # 3

# Add element:
fruits+=("date")

# Loop through array:
for fruit in "${fruits[@]}"; do
    echo $fruit
done

# Associative arrays (like dictionaries):
declare -A person
person[name]="John"
person[age]=30
person[city]="New York"

echo ${person[name]}  # John

# Loop through keys:
for key in "${!person[@]}"; do
    echo "$key: ${person[$key]}"
done

Practical Script Examples

Backup script:

#!/bin/bash

# Configuration
SOURCE="/home/user/documents"
DEST="/backup"
DATE=$(date +%Y%m%d_%H%M%S)
ARCHIVE="backup_$DATE.tar.gz"

# Create backup
echo "Starting backup..."
tar -czf "$DEST/$ARCHIVE" "$SOURCE"

if [[ $? -eq 0 ]]; then
    echo "Backup successful: $ARCHIVE"
else
    echo "Backup failed!"
    exit 1
fi

# Delete backups older than 30 days
find "$DEST" -name "backup_*.tar.gz" -mtime +30 -delete
echo "Cleanup complete"

Log analyzer:

#!/bin/bash

LOG_FILE="/var/log/apache2/access.log"

echo "=== Top 10 IP Addresses ==="
awk '{print $1}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -10

echo ""
echo "=== Top 10 Requested Pages ==="
awk '{print $7}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -10

echo ""
echo "=== HTTP Status Codes ==="
awk '{print $9}' "$LOG_FILE" | sort | uniq -c | sort -nr

System monitoring:

#!/bin/bash

# Check if disk usage exceeds 80%
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

if [[ $USAGE -gt 80 ]]; then
    echo "WARNING: Disk usage is ${USAGE}%"
    # Send email, SMS, etc.
fi

# Check if service is running
if ! systemctl is-active --quiet nginx; then
    echo "ERROR: Nginx is not running"
    sudo systemctl start nginx
fi

12. Input/Output Redirection

Control where commands read input and send output.

Output Redirection

Redirect stdout:

ls -la > file_list.txt        # Overwrite
ls -la >> file_list.txt       # Append

Redirect stderr:

command 2> errors.log         # Only errors
command 2>> errors.log        # Append errors

Redirect both stdout and stderr:

command &> output.log         # Both to same file
command > output.log 2>&1     # Traditional syntax
command 2>&1 | tee output.log # Both to file and screen

Discard output:

command > /dev/null           # Discard stdout
command 2> /dev/null          # Discard stderr
command &> /dev/null          # Discard both

Understanding file descriptors:

  • 0 = stdin (standard input)
  • 1 = stdout (standard output)
  • 2 = stderr (standard error)

Swap stdout and stderr:

command 3>&1 1>&2 2>&3
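# How it works: 3>&1 saves a copy of stdout in fd 3, 1>&2 points stdout
# at stderr, and 2>&3 points stderr at the saved copy in fd 3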

Input Redirection

# Feed file as input:
sort < unsorted.txt

# Here document (multi-line input):
cat << EOF > output.txt
Line 1
Line 2
Line 3
EOF

# Here string:
grep "pattern" <<< "text to search"

Practical Examples

Separate output and errors:

./script.sh > output.log 2> errors.log

Log everything:

./script.sh &> full.log

Show and log:

./script.sh 2>&1 | tee output.log

Silent execution:

cron_job.sh &> /dev/null

13. Advanced Techniques and Power User Features

SSH Key-Based Authentication

Eliminate passwords and enhance security:

1. Generate key pair (on local machine):

ssh-keygen -t ed25519 -C "your_email@example.com"

Or RSA for older systems:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

Press Enter to accept defaults. Optionally set a passphrase.

2. Copy public key to server:

ssh-copy-id user@server.com

Or manually:

cat ~/.ssh/id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

3. Test:

ssh user@server.com  # No password required!

Security hardening:

Edit /etc/ssh/sshd_config on server:

PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes

Restart SSH:

sudo systemctl restart sshd

Terminal Multiplexers: tmux and screen

Run persistent sessions that survive disconnections.

tmux Basics

Start session:

tmux
tmux new -s session_name

Detach from session: Press Ctrl+B, then d

List sessions:

tmux ls

Attach to session:

tmux attach
tmux attach -t session_name

Essential tmux commands (prefix with Ctrl+B):

  • c: Create new window
  • n: Next window
  • p: Previous window
  • 0-9: Switch to window by number
  • %: Split pane vertically
  • ": Split pane horizontally
  • Arrow keys: Navigate between panes
  • d: Detach from session
  • x: Kill current pane
  • &: Kill current window
  • ?: Show all keybindings

Workflow example:

# SSH into server:
ssh user@server.com

# Start tmux:
tmux new -s deployment

# Run long process:
./deploy_application.sh

# Detach: Ctrl+B, then d
# Log out: exit

# Later, reconnect:
ssh user@server.com
tmux attach -t deployment
# Your process is still running!

screen Basics

Start session:

screen
screen -S session_name

Detach from session: Press Ctrl+A, then d

List sessions:

screen -ls

Attach to session:

screen -r
screen -r session_name

Essential screen commands (prefix with Ctrl+A):

  • c: Create new window
  • n: Next window
  • p: Previous window
  • 0-9: Switch to window by number
  • S: Split horizontally
  • |: Split vertically (requires configuration)
  • Tab: Switch between splits
  • d: Detach
  • k: Kill current window
  • ?: Help

Why use multiplexers:

  • Run long processes on remote servers without keeping SSH connected
  • Organize multiple terminal windows in one interface
  • Share sessions with other users (pair programming)
  • Recover from network interruptions
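tmux is also scriptable, so you can rebuild a favorite layout with one command. A minimal sketch (the session name, pane commands, and log path are just examples):

tmux new-session -d -s work              # start a detached session named "work"
tmux split-window -h -t work             # split the first window into two side-by-side panes
tmux send-keys -t work:0.0 'htop' C-m    # run htop in the left pane
tmux send-keys -t work:0.1 'tail -f /var/log/syslog' C-m
tmux attach -t work                      # jump into the session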

Advanced Find Techniques

Find and execute complex operations:

# Find files older than 30 days and compress them:
find /var/log -name "*.log" -mtime +30 -exec gzip {} \;

# Find large files and show them sorted:
find / -type f -size +100M -exec ls -lh {} \; | sort -k5 -h

# Find and move files:
find . -name "*.tmp" -exec mv {} /tmp/ \;

# Find with multiple conditions:
find . -type f \( -name "*.log" -o -name "*.txt" \) -size +1M

# Find and confirm before deleting:
find . -name "*.bak" -ok rm {} \;

# Find files modified today:
find . -type f -mtime 0

# Find files by permissions:
find . -type f -perm 777  # Exactly 777
find . -type f -perm -644  # At least 644

# Find empty files and directories:
find . -empty

# Find by owner:
find /home -user john

# Find and change permissions:
find . -type f -name "*.sh" -exec chmod +x {} \;

Advanced xargs patterns:

# Process in batches:
find . -name "*.jpg" -print0 | xargs -0 -n 10 -P 4 process_images.sh

# Build complex commands:
find . -name "*.log" | xargs -I {} sh -c 'echo "Processing {}"; gzip {}'

# Handle special characters safely (Perl rename shown; replaces spaces with underscores):
find . -name "* *" -print0 | xargs -0 rename 's/ /_/g'

# Parallel processing:
find . -name "*.txt" -print0 | xargs -0 -P 8 -I {} sh -c 'wc -l {} | tee -a count.log'

Process Management Deep Dive

Advanced process inspection:

# Show process tree:
pstree -p  # With PIDs
pstree -u  # With usernames

# Find process by name:
pgrep -f "python app.py"

# Kill by name (careful!):
pkill -f "python app.py"

# Show threads:
ps -T -p PID

# Real-time process monitoring with filtering:
watch -n 1 'ps aux | grep python'

# CPU-consuming processes:
ps aux --sort=-%cpu | head -10

# Memory-consuming processes:
ps aux --sort=-%mem | head -10

# Process with specific state:
ps aux | awk '$8 ~ /^Z/ {print}'  # Zombie processes

Nice and renice (process priority):

# Start with lower priority:
nice -n 10 ./cpu_intensive_task.sh

# Change priority of running process:
renice -n 5 -p PID

# Priority levels: -20 (highest) to 19 (lowest)
# Default: 0

Process signals:

kill -l  # List all signals

# Common signals:
kill -TERM PID  # Graceful termination (default)
kill -KILL PID  # Force kill (same as kill -9)
kill -HUP PID   # Hangup (reload config)
kill -STOP PID  # Pause
kill -CONT PID  # Resume
kill -USR1 PID  # User-defined signal 1
kill -USR2 PID  # User-defined signal 2
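Scripts can catch most of these signals with trap; a minimal sketch (note that KILL and STOP cannot be trapped):

#!/bin/bash
lockfile=/tmp/mytask.lock

cleanup() {
    echo "Caught signal, cleaning up..."
    rm -f "$lockfile"
    exit 0
}

trap cleanup TERM INT                  # graceful shutdown on kill / Ctrl+C
trap 'echo "Reload requested"' HUP     # treat HUP as a reload hook

touch "$lockfile"
while true; do sleep 1; done           # stand-in for real work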

Advanced Text Processing Patterns

Complex awk programs:

# Print lines with specific field value:
awk '$3 > 100 && $5 == "active"' data.txt

# Calculate and format:
awk '{sum += $2} END {printf "Total: $%.2f\n", sum}' prices.txt

# Field manipulation:
awk '{print $2, $1}' file.txt | column -t  # Swap and align

# Multiple patterns:
awk '/ERROR/ {errors++} /WARNING/ {warnings++} END {print "Errors:", errors, "Warnings:", warnings}' log.txt

# Process CSV with headers:
awk -F',' 'NR==1 {for(i=1;i<=NF;i++) header[i]=$i} NR>1 {print header[1]": "$1, header[2]": "$2}' data.csv

Sed scripting:

# Multiple substitutions:
sed -e 's/old1/new1/g' -e 's/old2/new2/g' file.txt

# Conditional replacement:
sed '/pattern/s/old/new/g' file.txt

# Delete range of lines:
sed '10,20d' file.txt

# Insert line before pattern:
sed '/pattern/i\New line here' file.txt

# Append line after pattern:
sed '/pattern/a\New line here' file.txt

# Change entire line:
sed '/pattern/c\Replacement line' file.txt

# Multiple commands from file:
sed -f commands.sed input.txt

Combining tools for complex parsing:

# Extract URLs from HTML:
grep -oP 'href="\K[^"]+' page.html | sort -u

# Parse JSON (with jq):
curl -s https://api.example.com/data | jq '.items[] | select(.status=="active") | .name'

# Parse log timestamps:
awk '{print $4}' access.log | cut -d: -f1 | sort | uniq -c

# Extract email addresses:
grep -oE '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt

Command History Tricks

Search history:

history | grep command  # Search history
Ctrl+R                  # Reverse search (interactive)
!!                      # Repeat last command
!n                      # Run command number n
!-n                     # Run nth command from end
!string                 # Run most recent command starting with string
!?string                # Run most recent command containing string
^old^new                # Replace text in last command

History expansion:

# Reuse arguments:
!$      # Last argument of previous command
!*      # All arguments of previous command
!^      # First argument of previous command

# Example:
ls /var/log/nginx/
cd !$   # Changes to /var/log/nginx/

Configure history:

# Add to ~/.bashrc:
export HISTSIZE=10000          # Commands in memory
export HISTFILESIZE=20000      # Commands in file
export HISTTIMEFORMAT="%F %T " # Add timestamps
export HISTCONTROL=ignoredups  # Ignore duplicates
export HISTIGNORE="ls:cd:pwd"  # Ignore specific commands

# Share history across terminals:
shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Bash Aliases and Functions

Create aliases (add to ~/.bashrc):

# Navigation shortcuts:
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'

# Safety nets:
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Common commands:
alias ll='ls -lah'
alias la='ls -A'
alias l='ls -CF'
alias grep='grep --color=auto'

# Git shortcuts:
alias gs='git status'
alias ga='git add'
alias gc='git commit'
alias gp='git push'

# System info:
alias ports='netstat -tulanp'
alias meminfo='free -m -l -t'
alias psg='ps aux | grep -v grep | grep -i -e VSZ -e'

# Safety:
alias mkdir='mkdir -pv'

# Reload bash config:
alias reload='source ~/.bashrc'

Create functions (more powerful than aliases):

# Extract any archive:
extract() {
    if [ -f "$1" ]; then    # quote "$1" so paths with spaces work
        case "$1" in
            *.tar.bz2)   tar xjf "$1"     ;;
            *.tar.gz)    tar xzf "$1"     ;;
            *.bz2)       bunzip2 "$1"     ;;
            *.rar)       unrar x "$1"     ;;
            *.gz)        gunzip "$1"      ;;
            *.tar)       tar xf "$1"      ;;
            *.tbz2)      tar xjf "$1"     ;;
            *.tgz)       tar xzf "$1"     ;;
            *.zip)       unzip "$1"       ;;
            *.Z)         uncompress "$1"  ;;
            *.7z)        7z x "$1"        ;;
            *)           echo "'$1' cannot be extracted" ;;
        esac
    else
        echo "'$1' is not a valid file"
    fi
}

# Create and enter directory:
mkcd() {
    mkdir -p "$1" && cd "$1"
}

# Quick backup:
backup() {
    cp "$1" "$1.backup-$(date +%Y%m%d-%H%M%S)"
}

# Find and replace in files:
replace() {
    # -Z/-0 pass NUL-separated filenames so paths with spaces survive
    grep -rlZ "$1" . | xargs -0 sed -i "s/$1/$2/g"
}

# Show PATH one per line:
path() {
    echo "$PATH" | tr ':' '\n'
}

Performance Optimization

Benchmark commands:

# Time command execution:
time command

# More detailed:
/usr/bin/time -v command

# Benchmark alternatives:
hyperfine "command1" "command2"  # Install separately

Monitor system performance:

# I/O statistics:
iostat -x 1

# Disk activity:
iotop

# Network bandwidth:
iftop
nload

# System calls:
strace -c command

# Open files by process:
lsof -p PID

# System load:
uptime
w

Disk performance:

# Test write speed:
dd if=/dev/zero of=testfile bs=1M count=1000

# Test read speed:
dd if=testfile of=/dev/null bs=1M

# Clear cache before testing:
sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"

# Measure disk I/O:
sudo hdparm -Tt /dev/sda

Security Best Practices

File security:

# Find files with dangerous permissions:
find / -type f -perm -002 2>/dev/null  # World-writable files
find / -type f -perm -4000 2>/dev/null  # SUID files

# Secure SSH directory:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
chmod 644 ~/.ssh/authorized_keys
chmod 644 ~/.ssh/known_hosts

# Remove world permissions:
chmod o-rwx file

# Set restrictive umask:
umask 077  # New files: 600, directories: 700

Monitor security:

# Check for failed login attempts:
sudo grep "Failed password" /var/log/auth.log

# Show recent logins:
last

# Show currently logged-in users:
w
who

# Check for listening ports:
sudo netstat -tulpn
sudo ss -tulpn

# Review sudo usage:
sudo grep sudo /var/log/auth.log

Secure file deletion:

# Overwrite before deletion:
shred -vfz -n 3 sensitive_file.txt

# Wipe free space (use carefully):
# sfill -l /path/to/mount

Systemd Service Management

Control services:

# Start/stop/restart:
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl reload nginx  # Reload config without restart

# Enable/disable (start on boot):
sudo systemctl enable nginx
sudo systemctl disable nginx

# Check status:
sudo systemctl status nginx
sudo systemctl is-active nginx
sudo systemctl is-enabled nginx

# List all services:
systemctl list-units --type=service
systemctl list-units --type=service --state=running

# View logs:
sudo journalctl -u nginx
sudo journalctl -u nginx -f  # Follow
sudo journalctl -u nginx --since "1 hour ago"
sudo journalctl -u nginx --since "2024-10-01" --until "2024-10-24"

# Failed services:
systemctl --failed

Create custom service:

# Create /etc/systemd/system/myapp.service:
[Unit]
Description=My Application
After=network.target

[Service]
Type=simple
User=myuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/run.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

# Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp

Cron Job Automation

Edit crontab:

crontab -e      # Edit your crontab
crontab -l      # List your crontab
crontab -r      # Remove your crontab
sudo crontab -u username -e  # Edit another user's crontab

Crontab syntax:

# โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ minute (0-59)
# โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ hour (0-23)
# โ”‚ โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ day of month (1-31)
# โ”‚ โ”‚ โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ month (1-12)
# โ”‚ โ”‚ โ”‚ โ”‚ โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ day of week (0-6, Sunday=0)
# โ”‚ โ”‚ โ”‚ โ”‚ โ”‚
# โ”‚ โ”‚ โ”‚ โ”‚ โ”‚
# * * * * * command to execute

Common patterns:

# Every minute:
* * * * * /path/to/script.sh

# Every 5 minutes:
*/5 * * * * /path/to/script.sh

# Every hour:
0 * * * * /path/to/script.sh

# Daily at 2:30 AM:
30 2 * * * /path/to/script.sh

# Every Sunday at midnight:
0 0 * * 0 /path/to/script.sh

# First day of month:
0 0 1 * * /path/to/script.sh

# Weekdays at 6 AM:
0 6 * * 1-5 /path/to/script.sh

# Multiple times:
0 6,12,18 * * * /path/to/script.sh

# At system reboot:
@reboot /path/to/script.sh

# Special shortcuts:
@yearly     # 0 0 1 1 *
@monthly    # 0 0 1 * *
@weekly     # 0 0 * * 0
@daily      # 0 0 * * *
@hourly     # 0 * * * *

Best practices for cron:

# Use absolute paths:
0 2 * * * /usr/bin/python3 /home/user/backup.py

# Redirect output:
0 2 * * * /path/to/script.sh > /var/log/script.log 2>&1

# Set environment variables:
PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/bash

0 2 * * * /path/to/script.sh

# Email results (if mail configured):
MAILTO=admin@example.com
0 2 * * * /path/to/script.sh
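To keep a slow job from overlapping its next scheduled run, wrap it in flock from util-linux (the lock file path is arbitrary):

# -n makes flock exit immediately if the previous run still holds the lock:
*/5 * * * * /usr/bin/flock -n /tmp/myjob.lock /path/to/script.sh >> /var/log/myjob.log 2>&1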

Regular Expressions Power

Grep with regex:

# Basic patterns:
grep '^Start' file.txt       # Lines starting with "Start"
grep 'end$' file.txt         # Lines ending with "end"
grep '^$' file.txt           # Empty lines
grep '[0-9]' file.txt        # Lines with digits
grep '[A-Z]' file.txt        # Lines with uppercase
grep '[aeiou]' file.txt      # Lines with vowels

# Extended regex (-E):
grep -E 'cat|dog' file.txt   # cat OR dog
grep -E 'colou?r' file.txt   # color or colour
grep -E '[0-9]+' file.txt    # One or more digits
grep -E '[0-9]{3}' file.txt  # Exactly 3 digits
grep -E '[0-9]{2,4}' file.txt # 2 to 4 digits

# Perl regex (-P):
grep -P '\d+' file.txt       # Digits (\d)
grep -P '\w+' file.txt       # Word characters (\w)
grep -P '\s+' file.txt       # Whitespace (\s)
grep -P '(?=.*\d)(?=.*[a-z])' # Lookahead assertions

# Email addresses:
grep -E '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt

# IP addresses:
grep -E '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' file.txt

# URLs:
grep -E 'https?://[^\s]+' file.txt

Command Line Efficiency Tips

Keyboard shortcuts:

Ctrl+A      # Move to beginning of line
Ctrl+E      # Move to end of line
Ctrl+U      # Delete from cursor to beginning
Ctrl+K      # Delete from cursor to end
Ctrl+W      # Delete word before cursor
Alt+D       # Delete word after cursor
Ctrl+L      # Clear screen (like 'clear')
Ctrl+R      # Reverse search history
Ctrl+G      # Escape from reverse search
Ctrl+C      # Cancel current command
Ctrl+Z      # Suspend current command
Ctrl+D      # Exit shell (or send EOF)
!!          # Repeat last command
sudo !!     # Repeat last command with sudo

Quick edits:

# Fix typo in previous command:
^typo^correction

# Example:
$ grpe error log.txt
^grpe^grep
# Runs: grep error log.txt

Brace expansion:

# Create multiple files:
touch file{1..10}.txt
# Creates: file1.txt, file2.txt, ..., file10.txt

# Create directory structure:
mkdir -p project/{src,bin,lib,doc}

# Copy with backup:
cp file.txt{,.bak}
# Same as: cp file.txt file.txt.bak

# Multiple extensions:
rm file.{txt,log,bak}

Command substitution:

# Use command output in another command:
echo "Today is $(date)"
mv file.txt file.$(date +%Y%m%d).txt

# Nested:
echo "Files: $(ls $(pwd))"

14. Troubleshooting and Debugging

Common Problems and Solutions

โ€œCommand not foundโ€:

# Check if command exists:
which command_name
type command_name

# Check PATH:
echo $PATH

# Find where command is:
find / -name command_name 2>/dev/null

# Add to PATH temporarily:
export PATH=$PATH:/new/directory

# Add to PATH permanently (add to ~/.bashrc):
export PATH=$PATH:/new/directory

โ€œPermission deniedโ€:

# Check permissions:
ls -l file

# Make executable:
chmod +x script.sh

# Check ownership:
ls -l file

# Change ownership:
sudo chown user:group file

# Run with sudo:
sudo command

โ€œNo space left on deviceโ€:

# Check disk space:
df -h

# Find large directories:
du -sh /* | sort -h

# Find large files:
find / -type f -size +100M -exec ls -lh {} \;

# Clear package cache (Ubuntu/Debian):
sudo apt clean

# Clear systemd journal:
sudo journalctl --vacuum-time=7d

โ€œToo many open filesโ€:

# Check current limit:
ulimit -n

# Increase limit (temporary):
ulimit -n 4096

# Check what's using files:
lsof | wc -l
lsof -u username

# Permanent fix (edit /etc/security/limits.conf):
* soft nofile 4096
* hard nofile 8192

Process wonโ€™t die:

# Try graceful kill:
kill PID

# Wait a bit, then force:
kill -9 PID

# If still alive, check:
ps aux | grep PID

# May be zombie (can't be killed, wait for parent):
ps aux | awk '$8 ~ /^Z/'

Debugging Scripts

Enable debugging:

#!/bin/bash -x  # Print each command before executing

# Or:
set -x  # Turn on debugging
# ... commands ...
set +x  # Turn off debugging

# Strict mode (recommended):
set -euo pipefail
# -e: Exit on error
# -u: Exit on undefined variable
# -o pipefail: Pipeline fails if any command fails

Debug output:

# Add debug messages:
echo "DEBUG: variable value is $var" >&2

# Function for debug messages:
debug() {
    if [[ "${DEBUG:-0}" == "1" ]]; then
        echo "DEBUG: $*" >&2
    fi
}

# Usage:
DEBUG=1 ./script.sh

Check syntax without running:

bash -n script.sh  # Check for syntax errors

Conclusion

You now have a comprehensive guide to the Linux command line and Bash scripting, covering everything from basic navigation to advanced automation. The key to mastery is practice:

  1. Start simple: Use basic commands daily until they become second nature
  2. Build gradually: Add more complex techniques as you encounter real problems
  3. Automate the repetitive: Turn recurring tasks into scripts
  4. Read documentation: Use man pages and --help extensively
  5. Experiment safely: Use test environments or directories for practice

Remember: the command line is a skill that compounds over time. Every technique you learn builds upon the last, and soon youโ€™ll find yourself crafting elegant one-liners that would have seemed impossible when you started.

Continue learning:

  • Explore your systemโ€™s man pages
  • Read other usersโ€™ scripts on GitHub
  • Join Linux communities and forums
  • Challenge yourself with command-line puzzles
  • Build your own tools and utilities

The command line isnโ€™t just a toolโ€”itโ€™s a superpower that makes you more productive, efficient, and capable. Master it, and youโ€™ll wonder how you ever worked without it.

NFS_Server

We need to set this up in two parts:

  • client side configuration
  • server side configuration

Server Side Configuration

  • Install the NFS packages: sudo apt install nfs-utils libnfsidmap (package names vary by distribution; on Debian/Ubuntu the server package is nfs-kernel-server)
  • Enable and start the NFS services; note that systemctl takes space-separated unit names, not commas: sudo systemctl enable rpcbind nfs-server then sudo systemctl start rpcbind nfs-server rpc-statd nfs-idmap
  • Create a directory for NFS and open its permissions: mkdir -p $HOME/Desktop/NFS-Share then sudo chmod 777 ~/Desktop/NFS-Share
  • Add the new shared filesystem to /etc/exports as /location <IP_allow>(rw,sync,no_root_squash), then apply it with exportfs -rv (see the consolidated sketch below)
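A consolidated sketch of the server side (the export path and client subnet are examples; nfs-kernel-server is the Debian/Ubuntu package name, while nfs-utils is the dnf/yum equivalent):

sudo apt install nfs-kernel-server                # Debian/Ubuntu (RHEL/Fedora: sudo dnf install nfs-utils libnfsidmap)
sudo systemctl enable --now rpcbind nfs-server    # enable and start in one step
mkdir -p "$HOME/Desktop/NFS-Share"
sudo chmod 777 "$HOME/Desktop/NFS-Share"          # lab use only; prefer tighter permissions in production
echo "$HOME/Desktop/NFS-Share 192.168.1.0/24(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -rv                                 # re-export everything listed in /etc/exports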

Client Side Configuration

  • Install the NFS client packages: sudo apt install nfs-utils rpcbind (on Debian/Ubuntu the client package is nfs-common)
  • Enable and start the rpcbind service: sudo systemctl start rpcbind
  • If mounts are blocked, stop the firewall or allow NFS through it: sudo systemctl stop firewalld (or the iptables service)
  • Show the mounts exported by the NFS server: showmount -e <IP of server side>
  • Create a mount point (directory): mkdir -p /mnt/share
  • Mount the NFS filesystem: mount <IP_server>:/location /mnt/share (a sketch with a persistent fstab entry follows)
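A minimal client-side sketch (the server IP and export path are examples):

showmount -e 192.168.1.10                         # confirm what the server exports
sudo mkdir -p /mnt/share
sudo mount 192.168.1.10:/srv/nfs-share /mnt/share
# To persist across reboots, add a line like this to /etc/fstab:
# 192.168.1.10:/srv/nfs-share  /mnt/share  nfs  defaults  0  0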

Setting Up SSH Server Between PC and Server

This guide explains how to set up and configure an SSH server to enable secure communication between a client PC and a server.


Prerequisites

  1. A Linux-based PC (client) and server.
  2. SSH package installed on both machines.
  3. Network connectivity between the PC and the server.

Step-by-Step Instructions

Step 1: Install OpenSSH

On both the client and server, install the OpenSSH package:

On the Server:

sudo apt update
sudo apt install openssh-server

On the Client:

sudo apt update
sudo apt install openssh-client

Step 2: Start and Enable SSH Service

Ensure the SSH service is running on the server:

sudo systemctl start ssh
sudo systemctl enable ssh

Check the service status:

sudo systemctl status ssh

Step 3: Configure SSH on the Server

  1. Open the SSH configuration file:

    sudo nano /etc/ssh/sshd_config
    
  2. Modify or verify the following settings:

    • PermitRootLogin: Set to no for security.
    • PasswordAuthentication: Set to yes to allow password-based logins initially (you can disable it after setting up key-based authentication).
  3. Save changes and restart the SSH service:

    sudo systemctl restart ssh
    

Step 4: Determine the Serverโ€™s IP Address

Find the serverโ€™s IP address to connect from the client:

ip a

Look for the IP address under the active network interface (e.g., 192.168.x.x).


Step 5: Test SSH Connection from the Client

On the client, open a terminal and connect to the server using:

ssh username@server_ip

Replace username with the serverโ€™s username and server_ip with the actual IP address.

Example:

ssh user@192.168.1.10

Step 6: Set Up Key-Based Authentication

  1. On the client, generate an SSH key pair:

    ssh-keygen -t rsa -b 4096
    
  2. Copy the public key to the server: on Linux

    ssh-copy-id username@server_ip
    

    On Windows (PowerShell), copy the key manually from your .ssh folder:

scp $env:USERPROFILE/.ssh/id_rsa.pub username@ip:~/.ssh/authorized_keys

    Note: this overwrites any existing authorized_keys on the server; if keys already exist there, append instead.

  3. Verify key-based login:

    ssh username@server_ip
    
  4. Disable password-based logins for added security:

    • Edit the serverโ€™s SSH configuration file:

      sudo nano /etc/ssh/sshd_config
      
    • Set PasswordAuthentication to no.

    • Restart the SSH service:

      sudo systemctl restart ssh
      

Step 7: Troubleshooting Common Issues

  • Firewall: Ensure SSH traffic is allowed through the firewall on the server:

    sudo ufw allow ssh
    sudo ufw enable
    
  • Connection Refused: Check if the SSH service is running and the correct IP address is used.

PostfixMail

Postfix Config Lines

Add the following lines to /etc/postfix/main.cf:

# Relay outgoing mail through Gmail's SMTP server
relayhost = [smtp.gmail.com]:587
myhostname = your_hostname

# Location of the sasl_passwd file we saved
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd

# Enable SASL authentication for the Postfix SMTP client, and require TLS
smtp_sasl_auth_enable = yes
smtp_tls_security_level = encrypt

# Disallow methods that permit anonymous authentication
smtp_sasl_security_options = noanonymous


Create a file named sasl_passwd under /etc/postfix/sasl/ with this line:

[smtp.gmail.com]:587 email@gmail.com:password

Lock down its ownership and permissions, then convert it into a Postfix lookup table:

cd /etc/postfix/sasl
sudo chown root:root *
sudo chmod 600 *
sudo postmap /etc/postfix/sasl/sasl_passwd

Start (or restart) the Postfix service:

sudo systemctl restart postfix


To send an email using Linux terminal

echo "Test Mail" | mail -s "Postfix TEST" paul@gmail.com
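To watch delivery attempts and errors while testing (the log path is the Debian/Ubuntu default; other distros may log via journalctl -u postfix):

sudo tail -f /var/log/mail.log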

Linux Commands for System Administrators

Basic commands for system monitoring

  1. sudo du -a / | sort -n -r | head -n 20 # list the 20 largest disk-usage entries
  2. journalctl | grep "error" # search log messages, kernel messages, and other system-related information
  3. dmesg --ctime | grep error # show the kernel ring buffer with human-readable timestamps
  4. sudo journalctl -p 3 -xb # priority-3 (error) messages from the current boot, with explanatory help text
  5. sudo systemctl --failed # services that failed to load
  6. du -sh .config # size of a specific directory
  7. find . -type f -exec grep -l "/dev/nvme0n1" {} + # find files and execute grep on them; {} stands for each result, and + or \; terminates the -exec clause

Reset password (for forgotten passwords)

Reset Root Password

  1. init=/bin/bash # from the GRUB command mode, find the kernel line containing linux and append this at the end
  2. Ctrl+X or F10 # save the changes and boot
  3. mount -o remount,rw / # remount the root filesystem read-write
  4. passwd # change the password
  5. reboot -f # force reboot; afterwards you can log in with the new password

Reset User Password

  1. rw init=/bin/bash # from the GRUB command mode, find the kernel line containing linux and append this at the end
  2. Ctrl+X or F10 # save the changes and boot
  3. passwd username # change the user's password
  4. reboot -f # force reboot; afterwards the user can log in with the new password

Some Useful Commands

  1. grep -Irl "akib" . # list files containing "akib" in the current directory
  2. grep -A 3 -B 3 "nvme" flake.nix # show matches in flake.nix with 3 lines of context before and after
  3. sed -i "s/akib/withNewText/g" file.txt # replace all occurrences in the file, in place
  4. cat /etc/passwd | column -t -s ":" -N USERNAME,PW,UID,GUID,COMMENT,HOME,INTERPRETER -J -n passwdFile # split the passwd file on the ":" delimiter and print it as columns; the -J flag emits JSON, with -n naming the table
  5. cat /etc/passwd | awk -F: 'BEGIN {printf "user\tPW\tUID\tGUID\tCOMMENT\tHOME\tINTERPRETER\n"} {printf "%s\t%s\t%s\t%s\t%s\t%s\t%s\n", $1, $2, $3, $4, $5, $6, $7}'
  6. cat /etc/passwd | column -t -s ":" -N USERNAME,PW,UID,GUID,COMMENT,HOME,INTERPRETER -H PW -O UID,USERNAME,GUID,COMMENT,HOME,INTERPRETER # -H hides a specific column, -O reorders columns
Basic

๐Ÿ› ๏ธ Common Ports & Protocols Cheat Sheet

A quick reference for well-known TCP/UDP ports and their usage. Useful for students, professionals, and anyone studying for certifications like CCNA, CompTIA, or Security+.


๐Ÿ“Œ Well-Known / System Ports (0 โ€“ 1023)

| Port | Service | Protocol | Description |
|------|---------|----------|-------------|
| 7 | Echo | TCP, UDP | Echo service |
| 19 | CHARGEN | TCP, UDP | Character Generator Protocol (rarely used, vulnerable) |
| 20 | FTP-data | TCP, SCTP | File Transfer Protocol (data) |
| 21 | FTP | TCP, UDP, SCTP | File Transfer Protocol (control) |
| 22 | SSH/SCP/SFTP | TCP, UDP, SCTP | Secure Shell, secure logins, file transfers, port forwarding |
| 23 | Telnet | TCP | Unencrypted text communication |
| 25 | SMTP | TCP | Simple Mail Transfer Protocol (email routing) |
| 53 | DNS | TCP, UDP | Domain Name System |
| 67 | DHCP/BOOTP | UDP | DHCP Server |
| 68 | DHCP/BOOTP | UDP | DHCP Client |
| 69 | TFTP | UDP | Trivial File Transfer Protocol |
| 80 | HTTP | TCP, UDP, SCTP | Web traffic (HTTP/1.x, HTTP/2 over TCP; HTTP/3 uses QUIC/UDP) |
| 88 | Kerberos | TCP, UDP | Network authentication system |
| 110 | POP3 | TCP | Post Office Protocol (email retrieval) |
| 123 | NTP | UDP | Network Time Protocol |
| 135 | Microsoft RPC EPMAP | TCP, UDP | Remote Procedure Call Endpoint Mapper |
| 137-139 | NetBIOS | TCP, UDP | NetBIOS services (name service, datagram, session) |
| 143 | IMAP | TCP, UDP | Internet Message Access Protocol |
| 161-162 | SNMP | UDP | Simple Network Management Protocol (unencrypted) |
| 179 | BGP | TCP | Border Gateway Protocol |
| 389 | LDAP | TCP, UDP | Lightweight Directory Access Protocol |
| 443 | HTTPS | TCP, UDP, SCTP | Secure web traffic (SSL/TLS) |
| 445 | Microsoft DS SMB | TCP, UDP | File sharing, Active Directory |
| 465 | SMTPS | TCP | SMTP over SSL/TLS |
| 514 | Syslog | UDP | System log protocol |
| 520 | RIP | UDP | Routing Information Protocol |
| 546-547 | DHCPv6 | UDP | DHCP for IPv6 (client/server) |
| 636 | LDAPS | TCP, UDP | LDAP over SSL |
| 993 | IMAPS | TCP | IMAP over SSL/TLS |
| 995 | POP3S | TCP, UDP | POP3 over SSL/TLS |

๐Ÿ“Œ Registered Ports (1024 โ€“ 49151)

| Port | Service | Protocol | Description |
|------|---------|----------|-------------|
| 1025 | Microsoft RPC | TCP | RPC service |
| 1080 | SOCKS proxy | TCP, UDP | Proxy protocol |
| 1194 | OpenVPN | TCP, UDP | VPN tunneling |
| 1433 | MS-SQL Server | TCP | Microsoft SQL Server |
| 1521 | Oracle DB | TCP | Oracle Database listener |
| 1701 | L2TP | TCP | Layer 2 Tunneling Protocol |
| 1720 | H.323 | TCP | VoIP signaling |
| 1723 | PPTP | TCP, UDP | VPN protocol (deprecated) |
| 1812-1813 | RADIUS | UDP | Authentication, accounting |
| 2049 | NFS | UDP | Network File System |
| 2082-2083 | cPanel | TCP, UDP | Web hosting control panel |
| 2222 | DirectAdmin | TCP | Hosting control panel |
| 2483-2484 | Oracle DB | TCP, UDP | Insecure & SSL listener |
| 3074 | Xbox Live | TCP, UDP | Online gaming |
| 3128 | HTTP Proxy | TCP | Common proxy port |
| 3260 | iSCSI Target | TCP, UDP | Storage protocol |
| 3306 | MySQL | TCP | Database system |
| 3389 | RDP | TCP | Windows Remote Desktop |
| 3690 | SVN | TCP, UDP | Apache Subversion |
| 3724 | World of Warcraft | TCP, UDP | Gaming |
| 4333 | mSQL | TCP | Mini SQL |
| 4444 | Blaster Worm | TCP, UDP | Malware |
| 5000 | UPnP | TCP | Universal Plug & Play |
| 5060-5061 | SIP | TCP, UDP | Session Initiation Protocol (VoIP) |
| 5222-5223 | XMPP | TCP, UDP | Messaging protocol |
| 5432 | PostgreSQL | TCP | Database system |
| 5900-5999 | VNC | TCP, UDP | Remote desktop (VNC) |
| 6379 | Redis | TCP | In-memory database |
| 6665-6669 | IRC | TCP | Internet Relay Chat |
| 6881-6999 | BitTorrent | TCP, UDP | File sharing |
| 8080 | HTTP Proxy/Alt | TCP | Alternate web port |
| 8443 | HTTPS Alt | TCP | Alternate secure web port |
| 9042 | Cassandra | TCP | NoSQL database |
| 9100 | Printer (PDL) | TCP | Print Data Stream |

๐Ÿ“Œ Dynamic / Private Ports (49152 โ€“ 65535)

These are used for ephemeral connections and custom apps. Safe to use for internal development/testing.
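On Linux you can check which ephemeral range the kernel actually hands out (and tune it via sysctl):

cat /proc/sys/net/ipv4/ip_local_port_range
# Typical output: 32768   60999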


๐ŸŽฏ Most Common Ports for Exams

If youโ€™re preparing for CCNA / CompTIA exams, focus on these:

| Port | Service |
|------|---------|
| 7 | Echo |
| 20, 21 | FTP |
| 22 | SSH/SCP |
| 23 | Telnet |
| 25 | SMTP |
| 53 | DNS |
| 67, 68 | DHCP |
| 69 | TFTP |
| 80 | HTTP |
| 88 | Kerberos |
| 110 | POP3 |
| 123 | NTP |
| 137-139 | NetBIOS |
| 143 | IMAP |
| 161, 162 | SNMP |
| 389 | LDAP |
| 443 | HTTPS |
| 445 | SMB |
| 636 | LDAPS |
| 3389 | RDP |
| 5060-5061 | SIP (VoIP) |

โœ… Conclusion

Familiarity with ports & protocols is essential for:

  • Building secure applications
  • Troubleshooting network issues
  • Passing certification exams

Keep this cheat sheet handy as a quick reference!

IPv4 Subnetting Cheat Sheet

Subnetting is one of the most fundamental yet challenging concepts in networking. This cheat sheet provides quick references to help you master IPv4 subnetting for certifications, administration, and network design.


IPv4 Subnets

Subnetting allows a host to determine if the destination machine is local or remote. The subnet mask determines how many IPv4 addresses are assignable within a network.

| CIDR | Subnet Mask | # of Addresses | Wildcard |
|------|-------------|----------------|----------|
| /32 | 255.255.255.255 | 1 | 0.0.0.0 |
| /31 | 255.255.255.254 | 2 | 0.0.0.1 |
| /30 | 255.255.255.252 | 4 | 0.0.0.3 |
| /29 | 255.255.255.248 | 8 | 0.0.0.7 |
| /28 | 255.255.255.240 | 16 | 0.0.0.15 |
| /27 | 255.255.255.224 | 32 | 0.0.0.31 |
| /26 | 255.255.255.192 | 64 | 0.0.0.63 |
| /25 | 255.255.255.128 | 128 | 0.0.0.127 |
| /24 | 255.255.255.0 | 256 | 0.0.0.255 |
| /23 | 255.255.254.0 | 512 | 0.0.1.255 |
| /22 | 255.255.252.0 | 1,024 | 0.0.3.255 |
| /21 | 255.255.248.0 | 2,048 | 0.0.7.255 |
| /20 | 255.255.240.0 | 4,096 | 0.0.15.255 |
| /19 | 255.255.224.0 | 8,192 | 0.0.31.255 |
| /18 | 255.255.192.0 | 16,384 | 0.0.63.255 |
| /17 | 255.255.128.0 | 32,768 | 0.0.127.255 |
| /16 | 255.255.0.0 | 65,536 | 0.0.255.255 |
| /15 | 255.254.0.0 | 131,072 | 0.1.255.255 |
| /14 | 255.252.0.0 | 262,144 | 0.3.255.255 |
| /13 | 255.248.0.0 | 524,288 | 0.7.255.255 |
| /12 | 255.240.0.0 | 1,048,576 | 0.15.255.255 |
| /11 | 255.224.0.0 | 2,097,152 | 0.31.255.255 |
| /10 | 255.192.0.0 | 4,194,304 | 0.63.255.255 |
| /9 | 255.128.0.0 | 8,388,608 | 0.127.255.255 |
| /8 | 255.0.0.0 | 16,777,216 | 0.255.255.255 |
| /7 | 254.0.0.0 | 33,554,432 | 1.255.255.255 |
| /6 | 252.0.0.0 | 67,108,864 | 3.255.255.255 |
| /5 | 248.0.0.0 | 134,217,728 | 7.255.255.255 |
| /4 | 240.0.0.0 | 268,435,456 | 15.255.255.255 |
| /3 | 224.0.0.0 | 536,870,912 | 31.255.255.255 |
| /2 | 192.0.0.0 | 1,073,741,824 | 63.255.255.255 |
| /1 | 128.0.0.0 | 2,147,483,648 | 127.255.255.255 |
| /0 | 0.0.0.0 | 4,294,967,296 | 255.255.255.255 |

Decimal to Binary Conversion

IPv4 addresses are actually 32-bit binary numbers. Subnet masks in binary show which part is the network and which part is the host.

| Subnet Mask | Binary | Wildcard | Binary Wildcard |
|-------------|--------|----------|-----------------|
| 255 | 1111 1111 | 0 | 0000 0000 |
| 254 | 1111 1110 | 1 | 0000 0001 |
| 252 | 1111 1100 | 3 | 0000 0011 |
| 248 | 1111 1000 | 7 | 0000 0111 |
| 240 | 1111 0000 | 15 | 0000 1111 |
| 224 | 1110 0000 | 31 | 0001 1111 |
| 192 | 1100 0000 | 63 | 0011 1111 |
| 128 | 1000 0000 | 127 | 0111 1111 |
| 0 | 0000 0000 | 255 | 1111 1111 |

Why Learn Binary?

  • 1 = Network portion
  • 0 = Host portion
  • Subnet masks must have all ones followed by all zeros.

Example: A /24 (255.255.255.0) subnet reserves 24 bits for network and 8 bits for hosts โ†’ 254 usable IPs.

/28 Example: If your ISP gives you 199.44.6.80/28, the block spans .80 through .95 in increments of 16; .80 is the network address and .95 the broadcast, so the usable range is .81 - .94.
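The same masking arithmetic, sketched in Bash (the IP and mask are the /28 example above):

ip=(199 44 6 80); mask=(255 255 255 240)
for i in 0 1 2 3; do
    net[i]=$((   ip[i] & mask[i] ))            # network   = IP AND mask
    bcast[i]=$(( net[i] | (255 - mask[i]) ))   # broadcast = network OR wildcard
done
echo "Network:   ${net[0]}.${net[1]}.${net[2]}.${net[3]}"          # 199.44.6.80
echo "Broadcast: ${bcast[0]}.${bcast[1]}.${bcast[2]}.${bcast[3]}"  # 199.44.6.95
# Usable hosts: 199.44.6.81 through 199.44.6.94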


IPv4 Address Classes

| Class | Range |
|-------|-------|
| A | 0.0.0.0 – 127.255.255.255 |
| B | 128.0.0.0 – 191.255.255.255 |
| C | 192.0.0.0 – 223.255.255.255 |
| D | 224.0.0.0 – 239.255.255.255 |
| E | 240.0.0.0 – 255.255.255.255 |

Reserved (Private) Ranges

| Range Type | IP Range |
|------------|----------|
| Class A | 10.0.0.0 – 10.255.255.255 |
| Class B | 172.16.0.0 – 172.31.255.255 |
| Class C | 192.168.0.0 – 192.168.255.255 |
| Localhost | 127.0.0.0 – 127.255.255.255 |
| Zeroconf (APIPA) | 169.254.0.0 – 169.254.255.255 |

Key Terminology

  • Wildcard Mask: Indicates available address bits for matching.
  • CIDR: Classless Inter-Domain Routing, uses /XX notation.
  • Network Portion: Fixed part of IP determined by subnet mask.
  • Host Portion: Variable part of IP usable for devices.

Conclusion

IPv4 subnetting can seem complex, but with practice and binary understanding, it becomes second nature. Keep this sheet handy for quick reference during exams, troubleshooting, or design work.

Tools

curl Cheat Sheet

Quick: Practical, consultant-style reference for using curl โ€” from basic GETs to file uploads, API interactions, cookies, scripting tips and advanced flags. Friendly tone, focused on getting you productive fast.


Table of contents

  1. What is curl?
  2. Quick examples โ€” Web browsing & headers
  3. Downloading files
  4. GET requests
  5. POST requests & forms
  6. API interaction & headers
  7. File uploads with --form / -F
  8. Cookies and sessions
  9. Scripting with curl
  10. Advanced & debugging flags
  11. Partial downloads & ranges
  12. Helpful one-line examples
  13. Etiquette & safety note

What is curl?

curl (client URL) is a command-line tool for transferring data with URL syntax. It supports many protocols (HTTP/S, FTP, SCP, SMTP, IMAP, POP3, etc.) and is ideal for quick checks, automation, scripting, API calls, and sometimes creative (or mischievous) automation.

Use it when you need protocol-level control from the terminal.


Quick examples โ€” Web browsing & headers

| Command | Description |
|---------|-------------|
| `curl http://example.com` | Print HTML body of http://example.com to stdout |
| `curl --list-only "http://example.com/dir/"` (`-l`) | List directory contents (if server allows) |
| `curl --location URL` (`-L`) | Follow 3xx redirects |
| `curl --head URL` (`-I`) | Fetch HTTP response headers only |
| `curl --head --show-error URL` | Headers and errors (helpful for down/unresponsive hosts) |

Downloading files

| Command | Notes |
|---------|-------|
| `curl --output hello.html http://example.com` (`-o`) | Save output to hello.html |
| `curl --remote-name URL` (`-O`) | Save file using remote filename |
| `curl --remote-name URL --output newname` | Download then rename locally |
| `curl --remote-name --continue-at - URL` | Resume partial download (if server supports ranges) |
| `curl "https://site/{a,b,c}.html" --output "file_#1.html"` | Download multiple variants using brace expansion and `#` placeholders |

Batch download pattern (extract links then download):

curl -L http://example.com/list/ | grep '\.mp4' | cut -d '"' -f 8 | while read i; do curl http://example.com/${i} -O; done

(Adjust grep/cut to the page structure.)


GET requests

| Command | Description |
|---------|-------------|
| `curl --request GET "http://example.com"` (`-X GET`) | Explicit GET request (usually optional) |
| `curl -s -w '%{remote_ip} %{time_total} %{http_code}\n' -o /dev/null URL` | Silent mode with custom output: IP, total time, HTTP code |

Example: fetch a JSON API (may require headers or tokens):

curl -X GET 'https://api.example.com/items?filter=all' -H 'Accept: application/json'

POST requests & forms

| Command | Description |
|---------|-------------|
| `curl --request POST URL -d 'key=value'` (`-X POST -d`) | Send URL-encoded data in the request body |
| `curl -H 'Content-Type: application/json' --data-raw '{"k":"v"}' URL` | Send raw JSON payload (set the content type) |

Examples with MongoDB Data API (illustrative):

# Insert document
curl --request POST 'https://data.mongodb-api/.../insertOne' \
  --header 'Content-Type: application/json' \
  --header 'api-key: YOUR_KEY' \
  --data-raw '{"dataSource":"Cluster0","database":"db","collection":"c","document":{ "name":"Alice" }}'

# Find one
curl --request POST 'https://data.mongodb-api/.../findOne' \
  --header 'Content-Type: application/json' \
  --header 'api-key: YOUR_KEY' \
  --data-raw '{"filter":{"name":"Alice"}}'

API interaction & headers

| Command | Description |
|---------|-------------|
| `-H` / `--header` | Add custom HTTP header (auth tokens, Content-Type, Accept, etc.) |
| `curl --header "Auth-Token:$TOKEN" URL` | Pass bearer or custom tokens in headers |
| `curl --user username:password URL` | Basic auth (`-u username:password`) |

Examples:

curl -H 'Authorization: Bearer $TOKEN' -H 'Accept: application/json' https://api.example.com/me
curl -u 'user:password' 'https://example.com/protected'

File uploads with --form / -F

Use -F to emulate HTML form file uploads (multipart/form-data).

| Command | Description |
|---------|-------------|
| `curl --form "file=@/path/to/file" URL` | Upload a file (`@file.txt` for a relative path, `@/abs/path` for an absolute one) |
| `curl --form "field=value" --form "file=@/path" URL` | Mix fields and files in one request |

Notes:

  • Prefix the path with @ to upload the file's contents; both relative (@filename) and absolute (@/full/path) paths work.
  • Without the @, curl sends the value as a literal string instead of uploading the file.

Examples:

curl -F "email=test@me.com" -F "submit=Submit" 'https://docs.google.com/forms/d/e/FORM_ID/formResponse' > output.html
curl -F "entry.123456789=@/Users/me/pic.jpg" 'https://example.com/upload' > response.html

Cookies and sessions

| Command | Description |
|---------|-------------|
| `curl --cookie "name=val;name2=val2" URL` (`-b`) | Send cookie(s) inline |
| `curl --cookie cookies.txt URL` | Load cookies from file (k=v;... format) |
| `curl --cookie-jar mycookies.txt URL` (`-c`) | Save received cookies into mycookies.txt |
| `curl --dump-header headers.txt URL` (`-D`) | Dump response headers (includes Set-Cookie) |

Cookie file format (simple):

key1=value1;key2=value2

Scripting with curl

curl is a natural fit for bash automation. Example script patterns:

  • Reusable function wrapper for API calls (add auth header once)
  • Download + checksum verification loop
  • Rate-limited loops for polite scraping (sleep between requests)

Example: simple reusable function

api_get(){
  local endpoint="$1"
  curl -s -H "Authorization: Bearer $API_KEY" "https://api.example.com/${endpoint}"
}

api_get "items"

Advanced & debugging flags

| Flag | Purpose |
|------|---------|
| `-h` | Show help |
| `--version` | Show curl version and features |
| `-v` | Verbose (request/response) |
| `--trace filename` | Detailed trace of operations and data |
| `-s` | Silent mode (no progress meter) |
| `-S` | Show error when used with `-s` |
| `-L` | Follow redirects |
| `--connect-timeout` | Seconds to wait for TCP connect |
| `-m` / `--max-time` | Max operation time in seconds |
| `-w` / `--write-out` | Print variables after completion (`%{http_code}`, `%{time_total}`, `%{remote_ip}`, etc.) |

Examples:

curl -v https://example.com
curl --trace trace.txt https://twitter.com/
curl -s -w '%{remote_ip} %{time_total} %{http_code}\n' -o /dev/null http://ankush.io
curl -L 'https://short.url' --connect-timeout 0.1

Partial downloads & ranges

Use -r to request byte ranges from HTTP/FTP responses (helpful for resuming or grabbing file snippets).

| Command | Notes |
| --- | --- |
| `curl -r 0-99 http://example.com` | First 100 bytes |
| `curl -r -500 http://example.com` | Last 500 bytes |
| `curl -r 0-99 ftp://ftp.example.com` | Ranges on FTP (explicit start/end required) |
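
For resuming an interrupted download (rather than slicing), -C - is usually simpler: it continues where the partial file left off. A quick sketch (URL is illustrative):

# Re-run the same command after a failure to pick up where it stopped
curl -C - -O https://example.com/big-file.iso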

Helpful one-line examples

# Show headers only
curl -I https://example.com

# Save response to file quietly
curl -sL https://example.com -o page.html

# POST JSON and pretty-print reply (using jq)
curl -s -H "Content-Type: application/json" -d '{"name":"A"}' https://api.example.com/insert | jq

# Upload file with field name "file"
curl -F "file=@./image.jpg" https://api.example.com/upload

# Send cookies from file and save response headers
curl -b cookies.txt -D headers.txt https://example.com

# Send URL-encoded form field
curl -d "field1=value1&field2=value2" -X POST https://form-endpoint

Request example (SMS via textbelt; use responsibly)

curl -X POST https://textbelt.com/text \
  --data-urlencode phone='+[E.164 number]' \
  --data-urlencode message='Please delete this message.' \
  -d key=textbelt

Response example: {"success":true,...} (service-dependent)


Etiquette & safety note

  • Only target servers or forms you own or have explicit permission to test. Abuse (flooding, unauthorized automation, fraud) is illegal and unethical.
  • Prefer --connect-timeout and rate-limiting in scripts to avoid hammering servers.
  • Keep secrets out of command history: use environment variables or --netrc where appropriate.

Nmap Cheat Sheet

Quick: A concise, practical reference for common Nmap workflows: target selection, scan types, discovery, NSE usage, output handling, evasion tricks, and useful one-liners. Designed like a consultant's quick reference, organized by category so you can scan and apply fast.


Table of contents

  1. Overview & Usage Tips
  2. Target Specification
  3. Scan Techniques
  4. Host Discovery
  5. Port Specification
  6. Service & Version Detection
  7. OS Detection
  8. Timing & Performance
  9. Timing Tunables
  10. NSE (Nmap Scripting Engine)
  11. Useful NSE Examples
  12. Firewall / IDS Evasion & Spoofing
  13. Output Formats & Options
  14. Helpful Output Examples & Pipelines
  15. Miscellaneous Flags & Other Commands
  16. Practical Tips & Etiquette

Overview & Usage Tips

  • Run Nmap as root (or with sudo) for the most feature-complete scans (e.g., SYN -sS, raw packets, OS detection).
  • Start with discovery (-sn) and light scans (-T3 -F -sV) to find live hosts before aggressive options.
  • Log results (-oA) so you can re-analyze and resume scans later.
  • Respect scope & permissions: scanning networks you don't own can be illegal.

Target Specification

Define which IPs/ranges/subnets Nmap should scan.

| Switch / Syntax | Example | Description |
| --- | --- | --- |
| Single IP | `nmap 192.168.1.1` | Scan a single host |
| Multiple IPs | `nmap 192.168.1.1 192.168.2.1` | Scan specific hosts |
| Range | `nmap 192.168.1.1-254` | Scan an IP range |
| Domain | `nmap scanme.nmap.org` | Scan a hostname |
| CIDR | `nmap 192.168.1.0/24` | CIDR subnet scan |
| `-iL` | `nmap -iL targets.txt` | Read targets from a file |
| `-iR` | `nmap -iR 100` | Scan 100 random hosts |
| `--exclude` | `nmap --exclude 192.168.1.1` | Exclude host(s) from the scan |
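
These options combine freely; for example, a ping-only sweep of hosts listed in a file while skipping one address (file name and address are illustrative):

nmap -iL targets.txt --exclude 192.168.1.5 -sn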

Scan Techniques

Pick based on stealth, permissions, and speed.

| Switch | Example | Description |
| --- | --- | --- |
| `-sS` | `nmap 192.168.1.1 -sS` | TCP SYN scan (stealthy; default with privileges) |
| `-sT` | `nmap 192.168.1.1 -sT` | TCP connect() scan (no raw sockets required) |
| `-sU` | `nmap 192.168.1.1 -sU` | UDP scan |
| `-sA` | `nmap 192.168.1.1 -sA` | ACK scan (firewall mapping) |
| `-sW` | `nmap 192.168.1.1 -sW` | Window scan |
| `-sM` | `nmap 192.168.1.1 -sM` | Maimon scan |
| `-A` | `nmap 192.168.1.1 -A` | Aggressive: OS, version, scripts, traceroute |

Host Discovery

Find out which hosts are up before scanning ports or when skipping port scans.

| Switch | Example | Description |
| --- | --- | --- |
| `-sL` | `nmap 192.168.1.1-3 -sL` | List scan: enumerate targets without sending probes |
| `-sn` | `nmap 192.168.1.1/24 -sn` | Ping/host discovery only (no port scan) |
| `-Pn` | `nmap 192.168.1.1-5 -Pn` | Skip host discovery (treat all hosts as up) |
| `-PS` | `nmap 192.168.1.1-5 -PS22-25,80` | TCP SYN discovery on the given ports (default 80) |
| `-PA` | `nmap 192.168.1.1-5 -PA22-25,80` | TCP ACK discovery on the given ports (default 80) |
| `-PU` | `nmap 192.168.1.1-5 -PU53` | UDP discovery on the given ports (default 40125) |
| `-PR` | `nmap 192.168.1.0/24 -PR` | ARP discovery (local networks only) |
| `-n` | `nmap 192.168.1.1 -n` | Never perform DNS resolution |

Port Specification

Target specific ports, ranges, or mixed TCP/UDP sets.

| Switch | Example | Description |
| --- | --- | --- |
| `-p` | `nmap 192.168.1.1 -p 21` | Scan a single port |
| `-p` | `nmap 192.168.1.1 -p 21-100` | Scan a port range |
| `-p` | `nmap 192.168.1.1 -p U:53,T:21-25,80` | Mix UDP and TCP ports |
| `-p-` | `nmap 192.168.1.1 -p-` | Scan all TCP ports (1-65535) |
| Service names | `nmap 192.168.1.1 -p http,https` | Use service names instead of numbers |
| `-F` | `nmap 192.168.1.1 -F` | Fast scan: top 100 ports |
| `--top-ports` | `nmap 192.168.1.1 --top-ports 2000` | Scan the top N ports by frequency |
| `-p0-` / `-p-65535` | `nmap 192.168.1.1 -p0-` | Open-ended ranges; `-p0-` scans from port 0 through 65535 |

Service & Version Detection

Try to identify the service and its version running on discovered ports.

| Switch | Example | Description |
| --- | --- | --- |
| `-sV` | `nmap 192.168.1.1 -sV` | Service/version detection |
| `-sV --version-intensity` | `nmap 192.168.1.1 -sV --version-intensity 8` | Intensity 0-9; higher means more probing |
| `--version-light` | `nmap 192.168.1.1 -sV --version-light` | Lighter, faster detection (less reliable) |
| `--version-all` | `nmap 192.168.1.1 -sV --version-all` | Full detection (intensity 9) |
| `-A` | `nmap 192.168.1.1 -A` | Includes `-sV`, OS detection, NSE scripts, traceroute |

OS Detection

Fingerprint the target TCP/IP stack to guess the OS.

| Switch | Example | Description |
| --- | --- | --- |
| `-O` | `nmap 192.168.1.1 -O` | Remote OS detection (TCP/IP fingerprinting) |
| `--osscan-limit` | `nmap 192.168.1.1 -O --osscan-limit` | Skip OS detection unless ports show an open/closed pattern |
| `--osscan-guess` | `nmap 192.168.1.1 -O --osscan-guess` | Be more aggressive about guesses |
| `--max-os-tries` | `nmap 192.168.1.1 -O --max-os-tries 1` | Limit how many OS probe attempts are made |
| `-A` | `nmap 192.168.1.1 -A` | OS detection included with `-A` |

Timing & Performance

Built-in timing templates trade off speed vs stealth.

| Switch | Example | Description |
| --- | --- | --- |
| `-T0` | `nmap 192.168.1.1 -T0` | Paranoid: maximum IDS evasion (very slow) |
| `-T1` | `nmap 192.168.1.1 -T1` | Sneaky: IDS evasion |
| `-T2` | `nmap 192.168.1.1 -T2` | Polite: reduce bandwidth/CPU usage |
| `-T3` | `nmap 192.168.1.1 -T3` | Normal (default) |
| `-T4` | `nmap 192.168.1.1 -T4` | Aggressive: faster but noisier |
| `-T5` | `nmap 192.168.1.1 -T5` | Insane: assumes a very fast, reliable network |

Timing Tunables (Fine Control)

Adjust timeouts, parallelism, rates and retries.

  • --host-timeout <time>: give up on a host after this time (e.g., --host-timeout 2m).
  • --min-rtt-timeout, --max-rtt-timeout, --initial-rtt-timeout <time>: control probe RTT timeouts.
  • --min-hostgroup, --max-hostgroup <size>: group size for parallel host scanning.
  • --min-parallelism, --max-parallelism <num>: probe parallelization controls.
  • --max-retries <tries>: maximum retransmissions.
  • --min-rate <n> / --max-rate <n>: lower/upper bounds on the packet send rate.

Examples:

nmap --host-timeout 4m --max-retries 2 192.168.1.1
nmap --min-rate 100 --max-rate 1000 -p- 192.168.1.0/24

NSE (Nmap Scripting Engine)

Use scripts to automate checks, fingerprinting, vulnerability discovery and enumeration.

| Switch | Example | Notes |
| --- | --- | --- |
| `-sC` | `nmap 192.168.1.1 -sC` | Run the default safe scripts (convenient discovery) |
| `--script` | `nmap 192.168.1.1 --script http*` | Run scripts by name or wildcard |
| `--script <script1>,<script2>` | `nmap --script banner,http-title` | Run specific scripts |
| `--script-args` | `nmap --script snmp-sysdescr --script-args snmpcommunity=public` | Provide arguments to scripts |
| `--script "not intrusive"` | `nmap --script "default and not intrusive"` | Compose script sets with boolean expressions |

Useful NSE Examples

A few practical one-liners to keep handy.

# Generate sitemap from web server (HTTP):
nmap -Pn --script=http-sitemap-generator scanme.nmap.org

# Fast random search for web servers:
nmap -n -Pn -p 80 --open -sV -vvv --script banner,http-title -iR 1000

# Brute-force DNS hostnames (subdomain guessing):
nmap -Pn --script=dns-brute domain.com

# Safe SMB enumeration (useful on internal networks):
nmap -n -Pn -vv -O -sV --script smb-enum*,smb-ls,smb-mbenum,smb-os-discovery,smb-vuln* 192.168.1.1

# Whois queries via scripts:
nmap --script whois* domain.com

# Detect XSS-style unsafe output escaping on HTTP port 80:
nmap -p80 --script http-unsafe-output-escaping scanme.nmap.org

# Check for SQL injection (scripted):
nmap -p80 --script http-sql-injection scanme.nmap.org

Firewall / IDS Evasion & Spoofing

Techniques to make traffic less obvious. Use responsibly.

| Switch | Example | Description |
| --- | --- | --- |
| `-f` | `nmap 192.168.1.1 -f` | Fragment packets (can evade some filters) |
| `--mtu` | `nmap 192.168.1.1 --mtu 32` | Set MTU/fragment size |
| `-D` | `nmap -D decoy1,decoy2,ME,decoy3 target` | Decoy IP addresses to confuse observers |
| `-S` | `nmap -S 1.2.3.4 target` | Spoof the source IP (may require raw sockets) |
| `-g` | `nmap -g 53 target` | Set the source port (useful to bypass simple filters) |
| `--proxies` | `nmap --proxies http://192.168.1.1:8080 target` | Relay scans through HTTP/SOCKS proxies |
| `--data-length` | `nmap --data-length 200 target` | Append random data to packets |

Example IDS evasion command

nmap -f -T0 -n -Pn --data-length 200 -D 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.23 192.168.1.1

Output Formats & Options

Save scans so you can analyze later or process programmatically.

| Switch | Example | Description |
| --- | --- | --- |
| `-oN` | `nmap 192.168.1.1 -oN normal.file` | Normal human-readable output file |
| `-oX` | `nmap 192.168.1.1 -oX xml.file` | XML output (good for parsing) |
| `-oG` | `nmap 192.168.1.1 -oG grep.file` | Grepable output (legacy) |
| `-oA` | `nmap 192.168.1.1 -oA results` | Write results.nmap, results.xml, and results.gnmap |
| `-oG -` | `nmap 192.168.1.1 -oG -` | Print grepable output to stdout |
| `--append-output` | `nmap -oN file --append-output` | Append to an existing file |
| `-v` / `-vv` | `nmap -v` | Increase verbosity |
| `-d` / `-dd` | `nmap -d` | Increase debugging info |
| `--reason` | `nmap --reason` | Show the reason a port state was classified |
| `--open` | `nmap --open` | Show only open or possibly-open ports |
| `--packet-trace` | `nmap --packet-trace` | Show raw packet send/receive detail |
| `--iflist` | `nmap --iflist` | List interfaces and routes |
| `--resume` | `nmap --resume results.file` | Resume an interrupted scan (requires a previously saved output file) |

Helpful Output Examples & Pipelines

Combine Nmap with standard UNIX tools to extract actionable info.

# Find web servers (HTTP):
nmap -p80 -sV -oG - --open 192.168.1.0/24 | grep open

# Generate list of live hosts from random scan (XML -> grep -> cut):
nmap -iR 10 -n -oX out.xml | grep "Nmap" | cut -d " " -f5 > live-hosts.txt

# Append hosts from second scan:
nmap -iR 10 -n -oX out2.xml | grep "Nmap" | cut -d " " -f5 >> live-hosts.txt

# Compare two scans:
ndiff scan1.xml scan2.xml

# Convert XML to HTML:
xsltproc nmap.xml -o nmap.html

# Frequency of open ports (clean and aggregate):
grep " open " results.nmap | sed -r 's/ +/ /g' | sort | uniq -c | sort -rn | less

Miscellaneous Flags

| Switch | Example | Description |
| --- | --- | --- |
| `-6` | `nmap -6 2607:f0d0:1002:51::4` | Enable IPv6 scanning |
| `-h` | `nmap -h` | Show the help screen |

Other Useful Commands (Mixed Examples)

# Discovery only on specific TCP ports, no port scan:
nmap -iR 10 -PS22-25,80,113,1050,35000 -v -sn

# ARP-only discovery on local net, verbose, no port scan:
nmap 192.168.1.0/24 -PR -sn -vv

# Traceroute to random targets (no ports):
nmap -iR 10 -sn --traceroute

# List targets only but use internal DNS server:
nmap 192.168.1.1-50 -sL --dns-server 192.168.1.1

# Show packet details during scan:
nmap 192.168.1.1 --packet-trace

Practical Tips & Etiquette

  • Always have written permission to scan networks you do not own.
  • Start small: discovery -> targeted port scan -> version detection -> scripts.
  • Use --script carefully; some scripts are intrusive.
  • Keep a log of what you scanned and when (timestamps help with audits).
  • For large networks, break scans into chunks and use --min-rate/--max-rate to control load.

Appendix โ€” Quick Command Generator (Examples)

  • nmap -sS -p 1-100 -T4 -oA quick-scan 192.168.1.0/24: fast SYN scan of ports 1-100, saving all three output formats.
  • nmap -Pn -sV --script=vuln -oX vuln-check.xml 10.0.0.5: skip host discovery, run version detection and vulnerability scripts, save XML.

SSH Cheat Sheet

Whether you need a quick recap of SSH commands or you're learning SSH from scratch, this guide will help. SSH is a must-have tool for network administrators and anyone who needs to log in to remote systems securely.


🔑 What Is SSH?

SSH (Secure Shell / Secure Socket Shell) is a network protocol that allows secure access to network services over unsecured networks.

Key tools included in the suite:

  • ssh-keygen → Create SSH authentication key pairs.
  • scp (Secure Copy Protocol) → Copy files securely between hosts.
  • sftp (Secure File Transfer Protocol) → Securely send/receive files.

By default, an SSH server listens on TCP port 22.


📝 Basic SSH Commands

| Command | Description |
| --- | --- |
| `ssh user@host` | Connect to a remote server |
| `ssh pi@raspberry` | Connect as pi on the default port 22 |
| `ssh pi@raspberry -p 3344` | Connect on custom port 3344 |
| `ssh -i /path/file.pem admin@192.168.1.1` | Connect using a private key file |
| `ssh root@192.168.2.2 'ls -l'` | Execute a remote command |
| `ssh user@192.168.3.3 bash < script.sh` | Run a local script remotely |
| `ssh friend@Best.local "tar cvzf - ~/ffmpeg" > output.tgz` | Download a compressed directory |

🔐 Key Management

| Command | Description |
| --- | --- |
| `ssh-keygen` | Generate SSH key pairs |
| `ssh-keygen -F [host]` | Find an entry in known_hosts |
| `ssh-keygen -R [host]` | Remove an entry from known_hosts |
| `ssh-keygen -y -f private.key > public.pub` | Derive the public key from a private key |
| `ssh-keygen -t rsa -b 4096 -C "email@example.com"` | Generate a new 4096-bit RSA key |
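
A common first-time setup sketch (server and comment are placeholders); Ed25519 is the usual modern alternative to RSA:

# Generate an Ed25519 key pair
ssh-keygen -t ed25519 -C "email@example.com"

# Install the public key on the server for key-based login
ssh-copy-id user@server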

📂 File Transfers

SCP (Secure Copy)

| Command | Description |
| --- | --- |
| `scp user@server:/file dest/` | Copy remote → local |
| `scp file user@server:/path` | Copy local → remote |
| `scp user1@server1:/file user2@server2:/path` | Copy between two servers |
| `scp -r user@server:/folder dest/` | Copy a directory recursively |
| `scp -P 8080 file user@server:/path` | Connect on port 8080 |
| `scp -C` | Enable compression |
| `scp -v` | Verbose output |
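
The flags compose; for example, a compressed, recursive copy over a non-standard port (paths are illustrative):

scp -C -P 8080 -r ./site user@server:/var/www/site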

SFTP (Secure File Transfer)

| Command | Description |
| --- | --- |
| `sftp user@server` | Connect to a server via SFTP |
| `sftp -P 8080 user@server` | Connect on port 8080 |
| `sftp -r dir user@server:/path` | Recursively transfer a directory |

⚙️ SSH Configurations & Options

| Command | Description |
| --- | --- |
| `man ssh_config` | SSH client configuration manual |
| `cat /etc/ssh/ssh_config` | View the system-wide SSH client config |
| `cat /etc/ssh/sshd_config` | View the system-wide SSH server config |
| `cat ~/.ssh/config` | View the user-specific config |
| `cat ~/.ssh/known_hosts` | View host keys of servers you have connected to |
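
A sample ~/.ssh/config entry (alias, address, and key path are illustrative); with it, ssh web1 picks up the user, port, and key automatically:

Host web1
  HostName 192.168.1.10
  User admin
  Port 3344
  IdentityFile ~/.ssh/id_ed25519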

SSH Agent & Keys

| Command | Description |
| --- | --- |
| `ssh-agent` | Start an agent to hold private keys |
| `ssh-add ~/.ssh/id_rsa` | Add a key to the agent |
| `ssh-add -l` | List cached keys |
| `ssh-add -D` | Delete all cached keys |
| `ssh-copy-id user@server` | Copy your public key to a remote server |

🖥️ Remote Server Management

After logging into a remote server:

  • cd → Change directory
  • ls → List files
  • mkdir → Create directory
  • mv → Move/rename files
  • nano/vim → Edit files
  • ps → List processes
  • kill → Stop a process
  • top → Monitor resources
  • exit → Close the SSH session

🚀 Advanced SSH Commands

X11 Forwarding (GUI Apps over SSH)

  • Client ~/.ssh/config:

    Host *
      ForwardAgent yes
      ForwardX11 yes
    
  • Server /etc/ssh/sshd_config:

    X11Forwarding yes
    X11DisplayOffset 10
    X11UseLocalhost no
    
| Command | Description |
| --- | --- |
| `sshfs user@server:/path /local/mount` | Mount a remote filesystem locally |
| `ssh -C user@host` | Enable compression |
| `ssh -X user@server` | Enable X11 forwarding |
| `ssh -Y user@server` | Enable trusted X11 forwarding |

🔒 SSH Tunneling

Local Port Forwarding -L

ssh -L local_port:destination_host:destination_port user@server

Example: ssh -L 2222:10.0.1.5:3333 root@192.168.0.1

Remote Port Forwarding -R

ssh -R remote_port:destination:destination_port user@server

Example: ssh -R 8080:192.168.3.8:3030 -N -f user@remote.host

Dynamic Port Forwarding -D (SOCKS Proxy)

ssh -D 6677 -q -C -N -f user@host

ProxyJump -J (Bastion Host)

ssh -J user@proxy_host user@target_host
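
-J also accepts a comma-separated chain of bastions (host names illustrative):

ssh -J user@jump1,user@jump2 user@target_host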

🛡️ Security Best Practices

  • Disable unused features: AllowTcpForwarding no, X11Forwarding no.
  • Change default port from 22 to something else.
  • Use SSH certificates with ssh-keygen.
  • Restrict logins with AllowUsers in sshd_config.
  • Use bastion hosts for added security.
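
A minimal /etc/ssh/sshd_config sketch applying several of these points (port and user names are placeholders):

# Move off the default port
Port 2222
# Only these users may log in
AllowUsers deploy admin
# Disable unused forwarding features
AllowTcpForwarding no
X11Forwarding no

Reload the SSH service afterwards (e.g., systemctl reload sshd) and keep an existing session open while you verify the new settings.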

✅ Conclusion

This cheat sheet covered:

  • Basic SSH connections
  • File transfers (SCP/SFTP)
  • Key management & configs
  • Remote management commands
  • Advanced tunneling & forwarding

SSH remains an indispensable tool for IT professionals and security practitioners.

Wireshark Cheat Sheet

Wireshark is one of the most popular and powerful tools for capturing, analyzing, and troubleshooting network traffic.

Whether you are a network administrator, security professional, or just someone curious about how networks work, learning Wireshark is a valuable skill. This cheat sheet serves as a quick reference for filters, commands, shortcuts, and syntax.


📊 Default Columns in Packet Capture

| Name | Description |
| --- | --- |
| No. | Frame number from the beginning of the capture |
| Time | Seconds since the first frame |
| Source (src) | Source address (IPv4, IPv6, or Ethernet) |
| Destination (dst) | Destination address |
| Protocol | Protocol in the Ethernet/IP/TCP segment |
| Length | Frame length in bytes |

🔎 Logical Operators

| Operator | Description | Example |
| --- | --- | --- |
| `and` / `&&` | Logical AND | All conditions must match |
| `or` / `\|\|` | Logical OR | At least one condition matches |
| `xor` / `^^` | Logical XOR | Exactly one of the two conditions matches |
| `not` / `!` | Negation | Exclude packets |
| `[n]` / `[...]` | Substring operator | Match specific text or bytes within a field |

🎯 Filtering Packets (Display Filters)

| Operator | Description | Example |
| --- | --- | --- |
| `eq` / `==` | Equal | `ip.dst == 192.168.1.1` |
| `ne` / `!=` | Not equal | `ip.dst != 192.168.1.1` |
| `gt` / `>` | Greater than | `frame.len > 10` |
| `lt` / `<` | Less than | `frame.len < 10` |
| `ge` / `>=` | Greater or equal | `frame.len >= 10` |
| `le` / `<=` | Less or equal | `frame.len <= 10` |

🧩 Filter Types

| Name | Description |
| --- | --- |
| Capture filter | Applied during capture; non-matching packets are never recorded |
| Display filter | Applied after capture to hide/show packets |

📡 Capturing Modes

| Mode | Description |
| --- | --- |
| Promiscuous mode | Capture all packets on the network segment |
| Monitor mode | Capture all wireless traffic (Linux/Unix only) |

⚡ Miscellaneous

  • Slice operator → [ ... ] (byte range)
  • Membership operator → {} (in)
  • Ctrl+E → Start/stop capturing

🔍 Capture Filter Syntax

Example:

tcp and src host 192.168.1.1 and dst host 202.164.30.1

🎨 Display Filter Syntax

Example:

http and ip.dst == 192.168.1.1 and tcp.port == 80

⌨️ Keyboard Shortcuts (Main Window)

| Shortcut | Action |
| --- | --- |
| Tab / Shift+Tab | Move between UI elements |
| ↓ / ↑ | Move between packets/details |
| Ctrl+↓ / F8 | Next packet (even if unfocused) |
| Ctrl+↑ / F7 | Previous packet |
| Ctrl+. | Next packet in the conversation |
| Ctrl+, | Previous packet in the conversation |
| Return / Enter | Toggle a tree item |
| Backspace | Jump to the parent node |

📑 Protocol Values

ether, fddi, ip, arp, rarp, decnet, lat, sca, moprc, mopdl, tcp, udp

🔍 Common Filtering Commands

| Usage | Syntax |
| --- | --- |
| Filter by IP | `ip.addr == 10.10.50.1` |
| Destination IP | `ip.dst == 10.10.50.1` |
| Source IP | `ip.src == 10.10.50.1` |
| IP range | `ip.addr >= 10.10.50.1 and ip.addr <= 10.10.50.100` |
| Traffic between two IPs | `ip.addr == 10.10.50.1 and ip.addr == 10.10.50.100` |
| Exclude IP | `!(ip.addr == 10.10.50.1)` |
| Subnet | `ip.addr == 10.10.50.1/24` |
| Port | `tcp.port == 25` |
| Destination port | `tcp.dstport == 23` |
| IP + port | `ip.addr == 10.10.50.1 and tcp.port == 25` |
| URL/host | `http.host == "hostname"` |
| Time | `frame.time >= "June 02, 2019 18:04:00"` |
| SYN flag | `tcp.flags.syn == 1 and tcp.flags.ack == 0` |
| Beacon frames | `wlan.fc.type_subtype == 0x08` |
| Broadcast | `eth.dst == ff:ff:ff:ff:ff:ff` |
| Multicast | `(eth.dst[0] & 1)` |
| Hostname | `ip.host == hostname` |
| MAC address | `eth.addr == 00:70:f4:23:18:c4` |
| RST flag | `tcp.flags.reset == 1` |

🛠️ Main Toolbar Items

| Icon | Item | Menu | Description |
| --- | --- | --- | --- |
| ▶️ | Start | Capture → Start | Begin capture |
| ⏹️ | Stop | Capture → Stop | Stop capture |
| 🔄 | Restart | Capture → Restart | Restart the session |
| ⚙️ | Options | Capture → Options… | Capture options dialog |
| 📂 | Open | File → Open… | Load a capture file |
| 💾 | Save As | File → Save As… | Save the capture file |
| ❌ | Close | File → Close | Close the current capture |
| 🔄 | Reload | View → Reload | Reload the capture file |
| 🔍 | Find Packet | Edit → Find Packet… | Search packets |
| ⏪ | Go Back | Go → Back | Jump back in history |
| ⏩ | Go Forward | Go → Forward | Jump forward |
| 🔍 | Go to Packet | Go → Packet | Jump to a specific packet |
| ↩️ | First Packet | Go → First Packet | Jump to the first packet |
| ↪️ | Last Packet | Go → Last Packet | Jump to the last packet |
| 📜 | Auto Scroll | View → Auto Scroll | Scroll during live capture |
| 🎨 | Colorize | View → Colorize | Colorize the packet list |
| 🔎 | Zoom In/Out | View → Zoom In/Out | Adjust zoom level |
| 🔲 | Normal Size | View → Normal Size | Reset zoom |
| 📏 | Resize Columns | View → Resize Columns | Fit column width |

✅ Conclusion

Wireshark is an incredibly powerful tool for analyzing and troubleshooting network traffic. This cheat sheet gives you commands, filters, and shortcuts to navigate Wireshark efficiently and quickly.

Google Search: one-page cheat sheet

A compact, copy-pasteable markdown cheat-sheet with short explanations and ready examples.


Core operators (fast, precise)

  • `related:` Find sites similar to a domain. Example: related:clientwebsite.com

  • `site:` Search only inside a specific website. Example: burnout at work site:hbr.org

  • `intitle:infographic` Pages that call out "infographic" in the title. Example: gdpr intitle:infographic

  • `filetype:` Restrict results to a file format (pdf, docx, ppt). Example: consulting case interview filetype:pdf

  • `intitle:2022` Find pages with a specific year in the title (good for reviews). Example: intitle:2022 laptop for students

  • `-` (minus) Exclude words to reduce noise. Example: meta -facebook

  • `-site:` Exclude an entire domain. Example: data visualization -site:youtube.com -site:pinterest.com

  • `"exact phrase"` Exact-match a full phrase. Example: "that's where google must be down"

  • `*` (wildcard) Placeholder for unknown words. Example: "top * programming languages 2024"

  • `+` Force inclusion / niche focus. Example: app annie +shopping

  • `OR` Return results that match either term. Example: growth marketing OR content marketing OR product marketing


Region & time filters

  • Country TLD with `site:` limits results to country-level domains. Example: vaccine site:.us or vaccine site:.fr

  • Date tools (Google → Tools → Any time) filter by recency (e.g., Past year). Example workflow: search google tasks tips → Tools → select Past year


Image quick tip

  • Transparent backgrounds: Images → Tools → Color → Transparent. Example: company logo → Tools → Color → Transparent

Quick reference (numbered list mapped to operators)

  1. Exact search: "search"
  2. Site search: site:
  3. Exclude: -search
  4. After date: after:YYYY-MM-DD (useful for single-date filtering)
  5. Range: YYYY-MM-DD..YYYY-MM-DD (or first..second for numbers)
  6. Compare / either/or: (A|B) C or A OR B C
  7. Wildcard: * (placeholder for unknown words; use inside quoted phrases)
  8. File type: filetype:pdf

Combine operators: practical combos

  • Find recent PDFs from universities:

    site:edu filetype:pdf intitle:2023
    
  • Search product reviews excluding YouTube:

    "laptop review" intitle:2024 -site:youtube.com
    
  • Regional news about vaccines:

    vaccine site:.de after:2024-01-01
    
  • Narrow Q&A on a topic:

    "how to build REST API" site:stackoverflow.com
    

Copy-paste cheat block

related:clientwebsite.com
burnout at work site:hbr.org
gdpr intitle:infographic
consulting case interview filetype:pdf
intitle:2022 laptop for students
meta -facebook
data visualization -site:youtube.com -site:pinterest.com
"that's where google must be down"
"top * programming languages 2024"
app annie +shopping
growth marketing OR content marketing OR product marketing
vaccine site:.us