Home
Welcome to My Learning Journal
Hey, I'm Akib, a Full Stack Developer & DevOps Engineer.
This is my personal collection of notes, guides, and cheat sheets on various topics in technology.
I use it both as a quick reference and as a way to share knowledge with others.
What You'll Find Here
This site is automatically built and deployed from my
NixOS configuration repository using Nix and mdBook.
- Linux - Installation guides, system tools, and server configs.
- Databases - Notes on MySQL, Postgres, and more.
- Deployment - Step-by-step guides on deploying applications.
- Dev Tools - Shell scripts, Git, automation tricks, and configs.
Happy learning!
It's Sat Jan 24 19:20:31 UTC 2026, a great day to document something new.
Docker
Core Concepts
- Image: A lightweight, standalone, executable package. It's a blueprint that includes everything needed to run an application: code, runtime, system tools, and libraries.
- Container: A running instance of an image. It's the actual, isolated environment where your application runs. You can create, start, stop, and delete multiple containers from a single image.
- Docker Hub: A public registry (like GitHub for code) where you can find, share, and store container images.
- Dockerfile: A text file with instructions for building a Docker image.
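To make the Dockerfile idea concrete, here is a minimal sketch for a hypothetical Node.js app (the base image, port, and file names are assumptions, not from the original):

FROM node:20-alpine          # start from an official base image
WORKDIR /app                 # set the working directory inside the image
COPY package*.json ./        # copy dependency manifests first (better layer caching)
RUN npm install              # install dependencies
COPY . .                     # copy the application source
EXPOSE 3000                  # document the port the app listens on
CMD ["node", "server.js"]    # default command when a container starts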
Image Management (Build, Pull, List)
Commands for building, downloading, and managing your local images.
| Command | Description |
|---|---|
| `docker build -t <name:tag> .` | Build an image from a Dockerfile in the current directory (`.`). The `-t` flag tags it with a human-readable name and tag (e.g., `-t my-app:latest`). |
| `docker build --no-cache ...` | Build an image without using the cache. Use this to force a fresh build from scratch. |
| `docker pull <image_name>` | Download (pull) an image from a registry like Docker Hub (e.g., `docker pull postgres`). |
| `docker images` | List all images stored locally on your machine. |
| `docker rmi <image_name>` | Remove (delete) a local image. You may need to stop/remove containers using it first. |
| `docker search <term>` | Search Docker Hub for images matching a search term. |
Container Lifecycle (Run, Stop, Interact)
Commands for creating, running, and managing your containers.
| Command | Description |
|---|---|
| `docker run <image_name>` | Create and start a new container from an image. |
| `docker run -d <image_name>` | Run in detached mode (in the background). The terminal is freed up. |
| `docker run --name <my-name> ...` | Give your container a custom name (e.g., `my-db-container`). |
| `docker run -p 8080:80 ...` | Map a port from your local machine (host) to the container. This example maps host port 8080 to container port 80. |
| `docker run -v /path/on/host:/path/in/container ...` | Mount a volume to persist data. This links a host directory to a container directory. |
| `docker run --rm ...` | Automatically remove the container when it stops. Excellent for temporary tasks and cleanup. |
| `docker run -it <image_name> sh` | Run in interactive mode (`-it`). This opens a shell (`sh` or `bash`) inside the new container. |
| `docker exec -it <container_name> sh` | Execute a command (like `sh`) inside an already running container. |
| `docker start <container_name>` | Start a stopped container. |
| `docker stop <container_name>` | Stop a running container gracefully. |
| `docker kill <container_name>` | Force-stop a running container immediately. |
| `docker rm <container_name>` | Remove a stopped container. |
| `docker rm -f <container_name>` | Force-remove a container (even if it's running). |
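Tying several of these flags together, a quick sketch that runs the official nginx image with a name, a port mapping, and automatic cleanup, then opens a shell inside it (the container name is a placeholder):

docker run -d --rm --name my-web -p 8080:80 nginx
docker exec -it my-web sh
docker stop my-web    # --rm removes the container once it stops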
Inspection & Logs
Commands for checking the status, logs, and details of your containers.
| Command | Description |
|---|---|
| `docker ps` | List all running containers. |
| `docker ps -a` | List all containers (running and stopped). |
| `docker logs <container_name>` | Show the logs (console output) of a container. |
| `docker logs -f <container_name>` | Follow the logs in real time (streams the live output). |
| `docker inspect <container_name>` | Show detailed information (JSON) about a container, including its IP address, port mappings, and volumes. |
| `docker container stats` | Show a live stream of resource usage (CPU, memory, network) for all running containers. |
Docker Hub & Registries
Commands for authenticating and sharing your custom images.
| Command | Description |
|---|---|
| `docker login` | Log in to Docker Hub or another container registry. You'll be prompted for your credentials. |
| `docker push <username>/<image_name>` | Push (upload) your local image to Docker Hub. The image must be tagged with your username first (e.g., `docker build -t myuser/my-app .`). |
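For example, if an image was built without your username in the tag, you can retag it and then push (`myuser` and `my-app` are placeholders):

docker tag my-app:latest myuser/my-app:latest
docker push myuser/my-app:latest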
System Cleanup (QOL)
Essential commands for freeing up disk space.
| Command | Description |
|---|---|
| `docker container prune` | Remove all stopped containers. |
| `docker image prune` | Remove dangling images (images that aren't tagged or used by any container). |
| `docker image prune -a` | Remove all unused images (any image not used by at least one container). |
| `docker volume prune` | Remove all unused volumes (volumes not attached to any container). |
| `docker system prune` | The "big one": removes all stopped containers, all dangling images, and all unused networks. |
| `docker system prune -a --volumes` | The "nuke": removes all stopped containers, all unused images (not just dangling), all unused networks, and all unused volumes. |
Docker Compose (Advanced)
The standard tool for defining and running multi-container applications (e.g., a web app, a database, and a cache). It uses a docker-compose.yml file.
| Command | Description |
|---|---|
| `docker compose up` | Build and start all services defined in your docker-compose.yml file. Runs in the foreground. |
| `docker compose up -d` | Build and start all services in detached mode (in the background). |
| `docker compose down` | Stop and remove the containers and networks defined in the compose file (volumes are kept by default). |
| `docker compose down -v` | Stop and remove everything, including named volumes. |
| `docker compose ps` | List all containers managed by the current compose project. |
| `docker compose logs` | Show logs from all services in the compose project. |
| `docker compose logs -f <service_name>` | Follow the logs in real time for one or more specific services. |
| `docker compose exec <service_name> sh` | Execute a command (like `sh`) inside a running service's container. |
| `docker compose build` | Force a rebuild of the images for your services before starting. |
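As an illustration, a minimal docker-compose.yml for a web app plus a database might look like this (service names, images, and credentials are placeholders, not from the original):

services:
  web:
    build: .                      # build the app image from the local Dockerfile
    ports:
      - "8080:80"                 # host:container port mapping
    depends_on:
      - db                        # start the database first
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example  # demo credentials only
    volumes:
      - db-data:/var/lib/postgresql/data   # persist database files

volumes:
  db-data:                        # named volume managed by Docker

With this file in place, `docker compose up -d` brings both services up together on a shared network.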
Volumes & Networking (Advanced)
Commands for explicitly managing persistent data and custom networks.
| Command | Description |
|---|---|
| `docker volume ls` | List all volumes on your system. |
| `docker volume create <volume_name>` | Create a new managed volume. |
| `docker volume inspect <volume_name>` | Show detailed information about a volume. |
| `docker volume rm <volume_name>` | Remove one or more volumes. |
| `docker network ls` | List all networks on your system. |
| `docker network create <network_name>` | Create a new custom bridge network. Containers on the same network can communicate by name. |
| `docker network inspect <network_name>` | Show detailed information about a network. |
| `docker network connect <net> <container>` | Connect a running container to an additional network. |
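To see the name-based communication from the table above in action, a quick sketch (the container and network names are arbitrary):

docker network create my-net
docker run -d --name web --network my-net nginx
# the second container can reach the first by its name, "web"
docker run --rm --network my-net curlimages/curl http://web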
Git
Initial Configuration
Set these up once on any new machine.
- `git config --global user.name "Your Name"` - Sets the name that will appear on your commits.
- `git config --global user.email "you@example.com"` - Sets the email for your commits.
- `git config --global init.defaultBranch main` - Sets the default branch name to `main` for new repos.
- `git config --global alias.lg "log --graph --oneline --decorate --all"` - Creates a `git lg` shortcut for a clean, comprehensive log.
- `git config --global alias.st "status -s"` - Creates a `git st` shortcut for a short, one-line status.
Basic Workflow: Staging & Committing
This is your day-to-day command cycle.
- `git init` - Initializes a new Git repository in the current directory.
- `git status` - Shows the status of your working directory and staging area (untracked, modified, and staged files).
- `git add <file...>` - Adds one or more specific files to the staging area. Example: `git add README.md package.json`
- `git add .` - Adds all new and modified files in the current directory to the staging area.
- `git add -p` - Interactively stages parts of files. Git will show you each "hunk" of changes and ask if you want to stage it (y/n/q).
- `git commit -m "Your descriptive message"` - Saves a permanent snapshot of the staged files to the project history.
- `git commit -am "Your message"` - A shortcut to stage all tracked files and commit them in one step. (Note: does not add new, untracked files.)
- `git rm <file>` - Removes a file from both the working directory and the staging area.
- `git rm --cached <file>` - Removes a file from the staging area (index) but keeps it in your working directory. Useful for "untracking" a file, like a config file you accidentally added.
- `git mv <old-name> <new-name>` - Renames a file. Equivalent to `mv <old> <new>`, `git rm <old>`, and `git add <new>`.
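Putting the cycle together, a first commit in a new project might look like this (the file name is a placeholder):

git init
git add README.md
git status
git commit -m "Add project README"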
Inspecting History & Logs
See what has happened in the project.
- `git log` - Shows the full commit history for the current branch.
- `git log --oneline` - Shows a compact, one-line view of the commit history.
- `git lg` (or `git log --graph --oneline --decorate --all`) - A powerful, customized log (using the alias from setup) that shows all branches, commit graphs, and tags in a clean one-line format.
- `git log -p <file>` - Shows the commit history for a specific file, including the changes (patches) made in each commit.
- `git reflog` - Shows a log of all movements of `HEAD` (commits, checkouts, resets, merges). This is your ultimate safety net for finding "lost" commits.
Branching & Merging
Manage parallel lines of development.
Branching
- `git branch` - Lists all your local branches.
- `git branch -a` - Lists all local and remote-tracking branches.
- `git branch <branch-name>` - Creates a new branch based on your current `HEAD`.
- `git checkout <branch-name>` - Switches your working directory to the specified branch.
- `git checkout -b <branch-name>` - A shortcut to create a new branch and switch to it immediately.
- `git branch -m <new-name>` - Renames the current branch.
- `git branch -d <branch-name>` - Deletes a merged local branch. Git will stop you if the branch isn't merged (safety feature).
- `git branch -D <branch-name>` - Force-deletes a local branch, even if it's not merged.
Merging & Rebasing
- `git merge <branch-name>` - Merges the specified branch into your current branch. This creates a new "merge commit" if there are new commits on both branches (a non-fast-forward merge).
- `git rebase <branch-name>` - Re-applies your current branch's commits on top of the specified branch, creating a cleaner, linear history. Example: you're on `feature` and `main` has updated; run `git rebase main` to move your `feature` work to the tip of `main`.
- `git rebase -i HEAD~3` - Interactively rebases the last 3 commits. This opens an editor allowing you to `squash` (combine), `reword` (change message), `edit`, `drop`, or reorder commits (see the sample todo list below).
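For orientation, the todo list that `git rebase -i HEAD~3` opens looks roughly like this (hashes and messages are invented); editing the verb at the start of each line decides that commit's fate:

pick 1a2b3c4 Add login form
squash 5d6e7f8 Fix login form typo
reword 9a0b1c2 Add logout button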
Stashing
Temporarily save changes you arenโt ready to commit.
- `git stash` or `git stash save "Your message"` - Takes all your uncommitted changes (in tracked files), saves them, and cleans your working directory back to `HEAD`.
- `git stash list` - Shows all stashes you've saved.
- `git stash pop` - Applies the most recent stash to your working directory and deletes it from the stash list.
- `git stash apply stash@{n}` - Applies a specific stash (e.g., `stash@{1}`) but does not delete it from the list.
- `git stash drop stash@{n}` - Deletes a specific stash from the list.
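A common stash workflow: pause work on a feature, handle an urgent fix, then pick up where you left off (branch names are placeholders):

git stash
git checkout hotfix
# ...fix and commit...
git checkout feature
git stash pop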
Remote Repositories (e.g., GitHub)
Manage connections to other repositories.
Managing Remotes
- `git remote add <name> <url>` - Adds a new remote. The standard name is `origin`. Example: `git remote add origin https://github.com/user/repo.git`
- `git remote -v` - Lists all your remotes with their URLs.
- `git remote rename <old-name> <new-name>` - Renames a remote.
- `git remote remove <name>` - Removes a remote.
Syncing Changes
- `git fetch <remote-name>` - Downloads all branches and history from the remote without merging them into your local branches. This is safe and lets you inspect changes first.
- `git pull <remote-name> <branch-name>` - A shortcut for `git fetch` followed by `git merge`. It fetches and immediately tries to merge the remote branch into your current local branch. Example: `git pull origin main`
- `git push <remote-name> <branch-name>` - Uploads your local branch's commits to the remote repository.
- `git push -u <remote-name> <branch-name>` - Pushes and sets the remote as the "upstream" tracking branch. After this, you can just run `git pull` or `git push` from that branch.
- `git push <remote-name> --delete <branch-name>` - Deletes a branch on the remote repository.
- `git push --force-with-lease` - Warning: force-pushes your local branch, overwriting the remote. This is safer than `git push --force` because it will fail if someone else has pushed new commits in the meantime. Use it only when you have rewritten history (e.g., after a rebase) and have coordinated with your team.
Undoing & Rewriting History
How to fix mistakes after the fact.
Before Committing (Working Directory / Staging)
- `git restore <file>` - Discards changes in your working directory. (The modern, clearer version of `git checkout -- <file>`.)
- `git restore --staged <file>` - Unstages a file, moving it from the staging area back to the working directory. (The modern version of `git reset HEAD <file>`.)
After Committing (But Before Pushing)
- `git commit --amend` - Lets you change the last commit's message or add more staged files to it. It replaces the last commit with a new one.
- `git reset --soft HEAD~1` - Un-commits the last commit. The changes from that commit are moved to the staging area.
- `git reset --mixed HEAD~1` (the default) - Un-commits the last commit. The changes are moved to the working directory (unstaged).
- `git reset --hard HEAD~1` - Warning: destroys the last commit and all changes associated with it. Your working directory is reset to the state of the commit before it. This is permanent.
- `git reset --hard <commit-hash>` - Warning: resets your entire project (working directory and index) to a specific commit, discarding all subsequent commits and changes.
After Pushing (Public Commits)
- `git revert <commit-hash>` - The safe way to "undo" a public commit. This creates a new commit that is the exact inverse of the specified commit. It doesn't rewrite history.
- `git revert -m 1 <merge-commit-hash>` - Reverts a merge commit. `-m 1` tells Git which parent to keep (usually 1).
- Changing a pushed commit message (highly disruptive to your team; avoid if possible):
  1. `git rebase -i HEAD~5` (go back far enough to find the commit).
  2. Find the commit line and change `pick` to `reword` (or `r`).
  3. Save and close. Git will prompt you to enter the new message.
  4. `git push --force-with-lease` - you must force-push because you've rewritten public history. All collaborators will need to re-sync their branches.
Advanced Tools
Git Worktrees
Manage multiple branches in separate directories simultaneously.
- `git clone --bare . /path/to/my-bare-repo.git` - Clones the current repository as a bare repository.
- `git worktree add <path> <branch-name>` - Checks out a branch into a new directory. Great for working on a hotfix while keeping your main `feature` branch checked out in your primary folder. Example: `git worktree add ../my-hotfix-branch hotfix`
- `git worktree list` - Shows all active worktrees.
- `git worktree remove <path>` - Removes the worktree at the specified path.
Git Submodules
Manage a repository inside another repository.
- `git submodule add <repo-url> <path>` - Adds the other repo as a submodule at the specified path.
- `git clone --recurse-submodules <repo-url>` - Clones a repository and automatically initializes and updates all its submodules.
- `git submodule update --init --recursive` - Run this after a normal `git clone` (or `git pull`) to initialize or update submodules.
- Workflow for updating a submodule:
  1. `cd <submodule-path>`
  2. `git checkout main` (or desired branch)
  3. `git pull`
  4. `cd ..` (back to the parent repo)
  5. `git add <submodule-path>`
  6. `git commit -m "Update submodule to latest"`
- This "parent" commit locks the submodule to the new commit hash you just pulled.
Relational
MySQL
MySQL Usage Guide
DATABASE
Creating, using, and managing databases.
-- Create a new database named myDB
CREATE DATABASE myDB;
-- Switch to the newly created database
USE myDB;
-- Delete the myDB database
DROP DATABASE myDB;
-- Set the myDB database to read-only mode
ALTER DATABASE myDB READ ONLY = 1;
-- Reset the read-only mode of the myDB database
ALTER DATABASE myDB READ ONLY = 0;
TABLES
Creating and modifying tables to organize data.
-- Create an 'employees' table with specified columns
CREATE TABLE employees(
employee_id INT,
first_name VARCHAR(50),
last_name VARCHAR(50),
hourly_pay DECIMAL(5, 2),
hire_date DATE
);
-- Retrieve all data from the 'employees' table
SELECT * FROM employees;
-- Rename the 'employees' table to 'workers'
RENAME TABLE employees TO workers;
-- Delete the 'employees' table
DROP TABLE employees;
Altering Tables
-- Add a new column 'phone_number' to the 'employees' table
ALTER TABLE employees
ADD phone_number VARCHAR(15);
-- Rename the 'phone_number' column to 'email'
ALTER TABLE employees
RENAME COLUMN phone_number TO email;
-- Change the data type of the 'email' column
ALTER TABLE employees
MODIFY COLUMN email VARCHAR(100);
-- Change the position of the 'email' column
ALTER TABLE employees
MODIFY email VARCHAR(100) AFTER last_name;
-- Move the 'email' column to the first position
ALTER TABLE employees
MODIFY email VARCHAR(100) FIRST;
-- Delete the 'email' column
ALTER TABLE employees
DROP COLUMN email;
INSERT ROW
Inserting data into tables.
-- Insert a single row into the 'employees' table
INSERT INTO employees VALUES(1, "Akib", "Ahmed", 25.90, "2024-04-06");
-- Insert multiple rows into the 'employees' table
INSERT INTO employees VALUES
(2, "Sakib", "Ahmed", 20.10, "2024-04-06"),
(3, "Rakib", "Ahmed", 16.40, "2024-04-06"),
(4, "Mula", "Ahmed", 10.90, "2024-04-06"),
(5, "Kodhu", "Ahmed", 19.70, "2024-04-06"),
(6, "Lula", "Ahmed", 23.09, "2024-04-06");
-- Insert specific fields into the 'employees' table
INSERT INTO employees (employee_id, first_name, last_name) VALUES(6, "Munia", "Khatun");
SELECT
Retrieving data from tables.
-- Retrieve all data from the 'employees' table
SELECT * FROM employees;
-- Retrieve specific fields from the 'employees' table
SELECT first_name, last_name FROM employees;
-- Retrieve data from the 'employees' table based on a condition
SELECT * FROM employees WHERE employee_id <= 2;
-- Retrieve data where the 'hire_date' column is NULL
SELECT * FROM employees WHERE hire_date IS NULL;
-- Retrieve data where the 'hire_date' column is not NULL
SELECT * FROM employees WHERE hire_date IS NOT NULL;
UPDATE & DELETE
Modifying and deleting data.
-- Update data in the 'employees' table based on a condition
UPDATE employees
SET hourly_pay = 10.3, hire_date = "2024-01-05"
WHERE employee_id = 7;
-- Update all rows in the 'employees' table for the 'hourly_pay' column
UPDATE employees
SET hourly_pay = 10.3;
-- Delete rows from the 'employees' table where 'hourly_pay' is NULL
DELETE FROM employees
WHERE hourly_pay IS NULL;
-- Delete the 'date_time' column from the 'employees' table
ALTER TABLE employees
DROP COLUMN date_time;
AUTO-COMMIT, COMMIT & ROLLBACK
Managing transactions.
-- Turn off auto-commit mode
SET AUTOCOMMIT = OFF;
-- Manually save changes made in the current transaction
COMMIT;
-- Delete all data from the 'employees' table
DELETE FROM employees;
-- Roll back changes made in the current transaction
ROLLBACK;
DATE & TIME
Working with date and time data.
-- Add a 'join_time' column to the 'employees' table
ALTER TABLE employees
ADD COLUMN join_time TIME;
-- Update the 'join_time' column with the current time
UPDATE employees
SET join_time = CURRENT_TIME();
-- Update the 'hire_date' column based on a condition
UPDATE employees
SET hire_date = CURRENT_DATE() + 1
WHERE hourly_pay >= 20;
-- Add a 'date_time' column to the 'employees' table
ALTER TABLE employees
ADD COLUMN date_time DATETIME;
-- Update the 'date_time' column with the current date and time
UPDATE employees
SET date_time = NOW();
-- Redefine the 'hire_date' column with CHANGE COLUMN (the name stays the same here; CHANGE COLUMN can also rename)
ALTER TABLE employees
CHANGE COLUMN hire_date hire_date DATE;
CONSTRAINTS
Ensuring data integrity with constraints.
UNIQUE
-- Create a 'products' table with a unique constraint on the 'product_name' column
CREATE TABLE products(
product_id INT,
product_name VARCHAR(50) UNIQUE,
product_price DECIMAL(4,2)
);
-- Add a unique constraint to the 'product_name' column in the 'products' table
ALTER TABLE products
ADD CONSTRAINT UNIQUE(product_name);
-- Insert data into the 'products' table
INSERT INTO products VALUES
(1, "tea", 15.9),
(2, "coffee", 20.89),
(3, "lemon", 10.10);
NOT NULL
-- Create a 'products' table with a NOT NULL constraint on the 'product_price' column
CREATE TABLE products(
product_id INT,
product_name VARCHAR(50) UNIQUE,
product_price DECIMAL(4,2) NOT NULL
);
-- Update the 'product_price' column to be NOT NULL
ALTER TABLE products
MODIFY product_price DECIMAL(4,2) NOT NULL;
-- Insert data into the 'products' table with a NOT NULL column
INSERT INTO products VALUES(4, "mango", 0);
CHECK
-- Create an 'employees' table with a check constraint on the 'hourly_pay' column
CREATE TABLE employees(
employee_id INT,
first_name VARCHAR(50),
last_name VARCHAR(50),
hourly_pay DECIMAL(5, 2),
hire_date DATE,
CONSTRAINT chk_hourly_pay CHECK (hourly_pay >= 10)
);
-- Add a check constraint to the 'hourly_pay' column
ALTER TABLE employees
ADD CONSTRAINT chk_hourly_pay CHECK(hourly_pay >= 10);
-- Insert data into the 'employees' table
INSERT INTO employees VALUES(7, "Kutta", "Mizan", 10.0, CURRENT_DATE(), CURRENT_TIME());
DEFAULT
-- Create a 'products' table with a default value for the 'product_price' column
CREATE TABLE products(
product_id INT,
product_name VARCHAR(50) UNIQUE,
product_price DECIMAL(4,2) DEFAULT 0
);
-- Set the default value for the 'product_price' column
ALTER TABLE products
ALTER product_price SET DEFAULT 0;
-- Insert data into the 'products' table with a default value
INSERT INTO products (product_id, product_name) VALUES(5, "soda");
-- Create a 'transactions' table with a default value for the 'transaction_date' column
CREATE TABLE transactions(
transaction_id INT,
amount DECIMAL(5,2),
transaction_date DATETIME DEFAULT NOW()
);
PRIMARY KEY
-- Create a table for transactions with a primary key
CREATE TABLE transactions(
transaction_id INT PRIMARY KEY,
amount DECIMAL(4,2),
transaction_date DATETIME
);
-- Add a primary key constraint
ALTER TABLE transactions
ADD CONSTRAINT PRIMARY KEY(transaction_id);
-- Set the starting value of the auto-increment counter
-- (the column must be declared AUTO_INCREMENT for this to take effect; see the next section)
ALTER TABLE transactions AUTO_INCREMENT = 1000;
-- Insert data into the transactions table (transaction_id is generated automatically)
INSERT INTO transactions(amount) VALUES (54.20);
-- Select all data from the transactions table
SELECT * FROM transactions;
AUTO_INCREMENT
-- Create a table for transactions with an auto-increment primary key
CREATE TABLE transactions(
transaction_id INT PRIMARY KEY AUTO_INCREMENT,
amount DECIMAL(5,2),
transaction_date DATETIME DEFAULT NOW()
);
-- Set the starting value for the auto-increment counter
ALTER TABLE transactions AUTO_INCREMENT = 1000;
-- Insert data into the transactions table, auto-increment starts from 1000
INSERT INTO transactions(amount) VALUES (45.20), (23.40), (98.00), (43.45);
-- Select all data from the transactions table
SELECT * FROM transactions;
FOREIGN KEY
-- Create a table for customers with a primary key
CREATE TABLE customers(
customer_id INT PRIMARY KEY AUTO_INCREMENT,
first_name VARCHAR(50),
last_name VARCHAR(50)
);
-- Create a 'transactions' table with a foreign key constraint
CREATE TABLE transactions(
transaction_id INT PRIMARY KEY AUTO_INCREMENT,
amount DECIMAL(5,2),
transaction_date DATETIME DEFAULT NOW(),
customer_id INT,
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
);
-- Add a foreign key constraint to the transactions table
ALTER TABLE transactions
ADD CONSTRAINT fk_customer_key
FOREIGN KEY(customer_id) REFERENCES customers(customer_id);
-- Insert data into the transactions table with customer_id
INSERT INTO transactions(amount, customer_id) VALUES (34.34, 1), (123.4, 3), (32.32, 1), (12.00, 2);
JOIN
Combining data from multiple tables.
-- Inner join transactions and customers tables
SELECT *
FROM transactions
INNER JOIN customers
ON transactions.customer_id = customers.customer_id;
-- Select specific fields from joined tables
SELECT transaction_id, transaction_date, first_name, last_name
FROM transactions
INNER JOIN customers
ON transactions.customer_id = customers.customer_id;
-- Left join transactions and customers tables
SELECT *
FROM transactions
LEFT JOIN customers
ON transactions.customer_id = customers.customer_id;
-- Right join transactions and customers tables
SELECT *
FROM transactions
RIGHT JOIN customers
ON transactions.customer_id = customers.customer_id;
FUNCTIONS
Built-in SQL functions.
-- Count the number of transactions
SELECT COUNT(amount) AS "Transaction count" FROM transactions;
-- Find the maximum amount
SELECT MAX(amount) AS max_dollar FROM transactions;
-- Find the minimum amount
SELECT MIN(amount) AS min_dollar FROM transactions;
-- Find the average amount
SELECT AVG(amount) AS avg_dollar FROM transactions;
-- Calculate the total amount
SELECT SUM(amount) AS sum_of_dollar FROM transactions;
-- Concatenate first_name and last_name into a new column
SELECT CONCAT(first_name, " ", last_name) as full_name FROM customers;
AND, OR & NOT
Combining conditions in SQL queries.
-- Add a job column to the employees table
ALTER TABLE employees
ADD COLUMN job VARCHAR(50) AFTER hourly_pay;
-- Update job data based on employee_id
UPDATE employees
SET job = "Programmer"
WHERE employee_id = 1;
-- Select employees with specific conditions
SELECT * FROM employees
WHERE employee_id >= 2 AND employee_id <= 6 AND job = "vendor";
-- Select employees with specific conditions using OR
SELECT * FROM employees
WHERE job = "programmer" OR job = "vendor";
-- Select employees with specific conditions using NOT
SELECT * FROM employees
WHERE NOT job = "programmer" AND NOT job = "vendor";
-- Select employees within a certain hourly pay range
SELECT * FROM employees
WHERE hourly_pay BETWEEN 15 AND 26;
-- Select employees with specific jobs using IN
SELECT * FROM employees
WHERE job IN("programmer", "vendor", "doctor");
WILD-CARDS
Using wildcards for pattern matching.
-- Select employees with first name ending with "hu"
SELECT * FROM employees
WHERE first_name LIKE "%hu";
-- Select employees hired on a specific day (07)
SELECT * FROM employees
WHERE hire_date LIKE "____-__-07";
-- Select employees with job ending with "e" followed by another character
SELECT * FROM employees
WHERE job LIKE "%e_";
ORDER BY
Sorting query results.
-- Select employees ordered by hourly pay in ascending order
SELECT * FROM employees
ORDER BY hourly_pay ASC;
-- Select employees ordered by hire date in descending order
SELECT * FROM employees
ORDER BY hire_date DESC;
-- Select transactions ordered by amount in descending order and customer_id in ascending order
SELECT * FROM transactions
ORDER BY amount DESC, customer_id ASC;
LIMIT
Limiting the number of records returned.
-- Select the first 3 customers
SELECT * FROM customers
LIMIT 3;
-- Select the last 3 customers ordered by customer_id
SELECT * FROM customers
ORDER BY customer_id DESC LIMIT 3;
-- Select 2 customers starting from the 1st position (pagination)
SELECT * FROM customers
LIMIT 0,2;
UNION
Combining results from multiple SELECT statements.
-- Combine unique first and last names from employees and customers
SELECT first_name, last_name FROM employees
UNION
SELECT first_name, last_name FROM customers;
-- Combine all first and last names from employees and customers, including duplicates
SELECT first_name, last_name FROM employees
UNION ALL
SELECT first_name, last_name FROM customers;
SELF JOIN
Joining a table to itself.
-- Add a referral_id column to the customers table
ALTER TABLE customers
ADD COLUMN referral_id INT;
-- Update referral_id for customers
UPDATE customers
SET referral_id = 1
WHERE customer_id = 2;
-- Self join to show referred customers
SELECT a.customer_id, a.first_name, a.last_name,
CONCAT(b.first_name, " ", b.last_name) AS "referred_by"
FROM customers AS a
INNER JOIN customers AS b
ON a.referral_id = b.customer_id;
-- Add a supervisor_id column to the employees table
ALTER TABLE employees
ADD supervisor_id INT;
-- Update supervisor_id for employees
UPDATE employees
SET supervisor_id = 7
WHERE employee_id BETWEEN 2 and 6;
-- Update supervisor_id for a specific employee
UPDATE employees
SET supervisor_id = 1
WHERE employee_id = 7;
-- Self join to show employees and their supervisors
SELECT a.employee_id, a.first_name, a.last_name,
CONCAT(b.first_name, " ", b.last_name) AS "reports to"
FROM employees AS a
INNER JOIN employees AS b
ON a.supervisor_id = b.employee_id;
VIEWS
Creating and using virtual tables.
-- Create a view based on the employees table
CREATE VIEW employee_attendance AS
SELECT first_name, last_name
FROM employees;
-- Retrieve data from the view
SELECT * FROM employee_attendance
ORDER BY last_name ASC;
-- Create a view for customer emails
CREATE VIEW customer_emails AS
SELECT email
FROM customers;
-- Insert data into the customers table and view the changes in the view
INSERT INTO customers
VALUES(6, "Musa", "Rahman", NULL, "musa@mail.com");
SELECT * FROM customers;
SELECT * FROM customer_emails;
INDEX
Improving query performance with indexes.
-- Show indexes for the customers table
SHOW INDEXES FROM customers;
-- Create an index on the last_name column
CREATE INDEX last_name_index
ON customers(last_name);
-- Use the index to speed up search
SELECT * FROM customers
WHERE last_name = "Chan";
-- Create a multi-column index
CREATE INDEX last_name_first_name_idx
ON customers(last_name, first_name);
-- Drop an index
ALTER TABLE customers
DROP INDEX last_name_index;
-- Benefit from the multi-column index during search
SELECT * FROM customers
WHERE last_name = "Chan" AND first_name = "Kuki";
SUB-QUERY
Using sub-queries to nest queries within queries.
-- Get the average hourly pay
SELECT AVG(hourly_pay) FROM employees;
-- Use a sub-query to get the average hourly pay within a larger query
SELECT first_name, last_name, hourly_pay,
(SELECT AVG(hourly_pay) FROM employees) AS avg_hourly_pay
FROM employees;
-- Filter rows based on a sub-query result
SELECT first_name, last_name, hourly_pay
FROM employees
WHERE hourly_pay >= (SELECT AVG(hourly_pay) FROM employees);
-- Use a sub-query with IN to filter customers
SELECT first_name, last_name
FROM customers
WHERE customer_id IN (SELECT DISTINCT customer_id
FROM transactions
WHERE customer_id IS NOT NULL);
-- Use a sub-query with NOT IN to filter customers
SELECT first_name, last_name
FROM customers
WHERE customer_id NOT IN (SELECT DISTINCT customer_id
FROM transactions
WHERE customer_id IS NOT NULL);
GROUP BY
Aggregating data with grouping.
-- Sum amounts grouped by transaction date
SELECT SUM(amount), transaction_date
FROM transactions
GROUP BY transaction_date;
-- Get the maximum amount per customer
SELECT MAX(amount), customer_id
FROM transactions
GROUP BY customer_id;
-- Count transactions per customer having more than one transaction
SELECT COUNT(amount), customer_id
FROM transactions
GROUP BY customer_id
HAVING COUNT(amount) > 1 AND customer_id IS NOT NULL;
ROLL-UP
Extending group by with roll-up for super-aggregate values.
-- Sum amounts with a roll-up
SELECT SUM(amount), transaction_date
FROM transactions
GROUP BY transaction_date WITH ROLLUP;
-- Count transactions with a roll-up
SELECT COUNT(transaction_id) AS "# of orders", customer_id
FROM transactions
GROUP BY customer_id WITH ROLLUP;
-- Sum hourly pay with a roll-up
SELECT SUM(hourly_pay) AS "hourly pay", employee_id
FROM employees
GROUP BY employee_id WITH ROLLUP;
ON-DELETE
Handling foreign key deletions.
-- Delete a customer record
DELETE FROM customers
WHERE customer_id = 3;
-- Disable foreign key checks and delete a customer
SET foreign_key_checks = 0;
DELETE FROM customers
WHERE customer_id = 3;
SET foreign_key_checks = 1;
-- Insert a customer record
INSERT INTO customers
VALUES(3, "Shilpi", "Akter", 3, "shilpy@mail.com");
-- Create a table with ON DELETE SET NULL
CREATE TABLE transactions(
transaction_id INT PRIMARY KEY,
amount DECIMAL(5, 3),
customer_id INT,
order_date DATE,
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
ON DELETE SET NULL
);
-- Update an existing table with ON DELETE SET NULL
ALTER TABLE transactions
DROP FOREIGN KEY fk_customer_key;
ALTER TABLE transactions
ADD CONSTRAINT fk_customer_key
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
ON DELETE SET NULL;
-- Create or alter a table with ON DELETE CASCADE
ALTER TABLE transactions
ADD CONSTRAINT fk_transaction_id
FOREIGN KEY(customer_id) REFERENCES customers(customer_id)
ON DELETE CASCADE;
STORED PROCEDURE
Creating reusable SQL code blocks.
-- Create a procedure
DELIMITER $$
CREATE PROCEDURE get_customers()
BEGIN
SELECT * FROM customers;
END $$
DELIMITER ;
-- Delete a procedure
DROP PROCEDURE get_customers;
-- Create a procedure with an argument
DELIMITER $$
CREATE PROCEDURE find_customer(IN id INT)
BEGIN
SELECT * FROM customers WHERE customer_id = id;
END $$
DELIMITER ;
-- Create a procedure with multiple arguments
DELIMITER $$
CREATE PROCEDURE find_customer(IN f_name VARCHAR(50), IN l_name VARCHAR(50))
BEGIN
SELECT * FROM customers WHERE first_name = f_name AND last_name = l_name;
END $$
DELIMITER ;
-- Call a procedure
CALL find_customer("Akib", "Ahmed");
TRIGGERS
Automatically performing actions in response to events.
-- Add a salary column to the employees table
ALTER TABLE employees
ADD COLUMN salary DECIMAL(10,2) AFTER hourly_pay;
-- Calculate salary based on hourly pay
UPDATE employees
SET salary = hourly_pay * 2080;
-- Create a trigger to update salary before updating hourly pay
CREATE TRIGGER before_hourly_pay_update
BEFORE UPDATE ON employees
FOR EACH ROW
SET NEW.salary = (NEW.hourly_pay * 2080);
-- Update hourly pay and see the trigger in action
UPDATE employees
SET hourly_pay = 50
WHERE employee_id = 1;
-- Create a trigger to update salary before inserting a new employee
CREATE TRIGGER before_hourly_pay_insert
BEFORE INSERT ON employees
FOR EACH ROW
SET NEW.salary = (NEW.hourly_pay * 2080);
-- Insert a new employee and see the trigger in action
INSERT INTO employees
VALUES(6, "Shel", "Plankton", 10, NULL, "Janitor", "2024-06-17", "09:22:23", 7);
-- Create a table for expenses
CREATE TABLE expenses(
expense_id INT PRIMARY KEY,
expense_name VARCHAR(50),
expense_total DECIMAL(10,2)
);
-- Insert initial data into the expenses table
INSERT INTO expenses
VALUES (1, "salaries", 0), (2, "supplies", 0), (3, "taxes", 0);
-- Update expenses based on salaries
UPDATE expenses
SET expense_total = (SELECT SUM(salary) FROM employees)
WHERE expense_name = "salaries";
-- Create a trigger to update expenses after deleting an employee
CREATE TRIGGER after_salary_delete
AFTER DELETE ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total - OLD.salary
WHERE expense_name = "salaries";
-- Delete an employee and see the trigger in action
DELETE FROM employees
WHERE employee_id = 6;
-- Create a trigger to update expenses after inserting a new employee
CREATE TRIGGER after_salary_insert
AFTER INSERT ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total + NEW.salary
WHERE expense_name = "salaries";
-- Insert a new employee and see the trigger in action
INSERT INTO employees
VALUES(6, "Shel", "Plankton", 10, NULL, "Janitor", "2024-06-17", "09:22:23", 7);
-- Create a trigger to update expenses after updating an employee's salary
CREATE TRIGGER after_salary_update
AFTER UPDATE ON employees
FOR EACH ROW
UPDATE expenses
SET expense_total = expense_total + (NEW.salary - OLD.salary)
WHERE expense_name = "salaries";
-- Update an employee's hourly pay and see the trigger in action
UPDATE employees
SET hourly_pay = 100
WHERE employee_id = 1;
PostgreSQL
PostgreSQL Quick Guide
A concise guide to common PostgreSQL commands, syntax, and concepts.
Key Differences from MySQL:
- Strings: use single quotes only (e.g., `'Hello World'`).
- Identifiers (table/column names): case-insensitive unless you wrap them in double quotes (e.g., `"myColumn"`).
- Switching DBs: there is no `USE db_name;` command. In the `psql` terminal, use the `\c db_name` meta-command.
- Auto-increment: use `SERIAL` or `GENERATED AS IDENTITY`.
- Concatenation: the standard SQL `||` operator is preferred (e.g., `first_name || ' ' || last_name`).
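A small sketch pulling these differences together (the `greetings` table is hypothetical):

-- Identity column instead of AUTO_INCREMENT, single-quoted strings, || concatenation
CREATE TABLE greetings (
    greeting_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    message VARCHAR(100)
);
INSERT INTO greetings (message) VALUES ('Hello' || ' ' || 'World');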
psql Command Line Basics
psql is the interactive terminal for PostgreSQL.
Connecting:
# Connect to a specific database as a specific user
psql -d myDB -U myUser -h localhost
Common Meta-Commands (start with \):
- `\l` - List all databases.
- `\c db_name` - Connect to a different database.
- `\dt` - List all tables in the current database.
- `\d table_name` - Describe a table (columns, indexes, constraints).
- `\dn` - List all schemas.
- `\df` - List all functions.
- `\du` - List all users (roles).
- `\timing` - Toggle query execution time display.
- `\e` - Open the last query in your text editor.
- `\q` - Quit `psql`.
Database & Role Management
Manage databases and user permissions.
-- Create a new database
CREATE DATABASE myDB;
-- Delete a database
DROP DATABASE myDB;
-- Create a new user (role) with login permission
CREATE ROLE myUser WITH LOGIN PASSWORD 'my_password';
-- Grant privileges for a user on a table
GRANT ALL ON employees TO myUser;
-- Grant privileges to connect to a database
GRANT CONNECT ON DATABASE myDB TO myUser;
Tables & Data Types
Create, modify, and delete tables.
-- Create a table with common data types
CREATE TABLE employees (
employee_id SERIAL PRIMARY KEY, -- Auto-incrementing primary key
first_name VARCHAR(50) NOT NULL,
hourly_pay NUMERIC(5, 2) DEFAULT 10.00, -- Equivalent to DECIMAL
hire_date DATE DEFAULT CURRENT_DATE,
created_at TIMESTAMP DEFAULT NOW()
);
-- Modify an existing table
-- (note: in PostgreSQL, RENAME cannot be combined with other actions in one statement)
ALTER TABLE employees
ADD COLUMN email VARCHAR(100) UNIQUE,
ALTER COLUMN hourly_pay TYPE NUMERIC(6, 2),
DROP COLUMN some_old_column;
ALTER TABLE employees RENAME COLUMN hire_date TO joined_date;
-- Rename a table
ALTER TABLE employees RENAME TO workers;
-- Delete a table
DROP TABLE employees;
Note: PostgreSQL does not support reordering columns (like AFTER or FIRST). You must recreate the table.
Constraints
Rules to ensure data integrity, best defined at creation.
CREATE TABLE products (
product_id SERIAL PRIMARY KEY,
product_name VARCHAR(50) UNIQUE NOT NULL,
price NUMERIC(6, 2) DEFAULT 0,
category_id INT,
-- Check constraint
CONSTRAINT chk_price CHECK (price >= 0),
-- Foreign key constraint with actions
CONSTRAINT fk_category
FOREIGN KEY(category_id)
REFERENCES categories(category_id)
ON DELETE SET NULL -- or ON DELETE CASCADE
);
-- Add a constraint to an existing table
ALTER TABLE employees
ADD CONSTRAINT chk_hourly_pay CHECK(hourly_pay >= 10.00);
Manipulating Data (CRUD)
The four basic data operations: Create, Read, Update, Delete.
-- CREATE (Insert)
-- Insert a single row (best practice to name columns)
INSERT INTO employees (first_name, last_name, hourly_pay)
VALUES ('Akib', 'Ahmed', 25.90);
-- Insert multiple rows
INSERT INTO employees (first_name, last_name, hourly_pay) VALUES
('Sakib', 'Ahmed', 20.10),
('Rakib', 'Ahmed', 16.40);
-- READ (Select)
SELECT * FROM employees;
-- UPDATE (Update)
UPDATE employees
SET hourly_pay = 27.50, email = 'akib@mail.com'
WHERE employee_id = 1;
-- DELETE (Delete)
DELETE FROM employees
WHERE employee_id = 1;
Transactions
Ensure that a group of SQL statements either all succeed or all fail together.
-- Start a transaction block
BEGIN;
-- Make changes
UPDATE employees SET hourly_pay = 99.00 WHERE employee_id = 2;
DELETE FROM employees WHERE employee_id = 3;
-- To undo the changes in this block
ROLLBACK;
-- To make the changes permanent
COMMIT;
Querying: Filtering & Sorting
Use SELECT to retrieve data with complex conditions.
SELECT
first_name || ' ' || last_name AS full_name,
hourly_pay,
joined_date
FROM employees
WHERE
(hourly_pay > 20 OR job IS NULL)
AND first_name ILIKE 'a%' -- ILIKE is case-insensitive LIKE
ORDER BY
joined_date DESC,
first_name ASC
LIMIT 10 OFFSET 5; -- Skip 5 rows, fetch the next 10 (for pagination)
Querying: Aggregation
Summarize data using aggregate functions and GROUP BY.
SELECT
job,
COUNT(employee_id) AS "employee_count",
AVG(hourly_pay) AS avg_pay,
SUM(hourly_pay) AS total_payroll
FROM employees
WHERE joined_date > '2023-01-01'
GROUP BY job
HAVING COUNT(employee_id) > 2 -- Filter groups, not rows
ORDER BY avg_pay DESC;
-- Use ROLLUP to get a grand total row
SELECT job, SUM(hourly_pay)
FROM employees
GROUP BY ROLLUP(job); -- Will add a final row with the total sum
Querying: Joins
Combine rows from two or more tables.
-- INNER JOIN: Returns only matching rows from both tables
SELECT e.first_name, t.amount
FROM employees AS e
INNER JOIN transactions AS t ON e.employee_id = t.employee_id;
-- LEFT JOIN: Returns all rows from the left (employees) table,
-- and matching rows from the right (transactions) table.
SELECT e.first_name, t.amount
FROM employees AS e
LEFT JOIN transactions AS t ON e.employee_id = t.employee_id;
-- SELF JOIN: Join a table to itself
SELECT a.first_name AS employee, b.first_name AS supervisor
FROM employees AS a
LEFT JOIN employees AS b ON a.supervisor_id = b.employee_id;
Querying: Combining
Combine the results of multiple SELECT statements.
-- UNION: Combines results and removes duplicates
SELECT first_name, last_name FROM employees
UNION
SELECT first_name, last_name FROM customers;
-- UNION ALL: Combines results and keeps all duplicates
SELECT first_name, last_name FROM employees
UNION ALL
SELECT first_name, last_name FROM customers;
-- Sub-query: Use a query result as a condition
SELECT * FROM employees
WHERE hourly_pay > (SELECT AVG(hourly_pay) FROM employees);
-- Common Table Expression (CTE): A temporary, named result set
WITH highest_payers AS (
SELECT * FROM employees WHERE hourly_pay > 50
)
SELECT * FROM highest_payers WHERE joined_date < '2024-01-01';
Database Objects
Reusable SQL components.
Views
A virtual table based on a SELECT query.
-- Create a read-only view
CREATE VIEW v_high_earners AS
SELECT employee_id, first_name, hourly_pay
FROM employees
WHERE hourly_pay > 30;
-- Query the view like a table
SELECT * FROM v_high_earners;
Indexes
Speed up data retrieval on frequently queried columns.
-- Create an index
CREATE INDEX idx_employees_last_name
ON employees(last_name);
-- Create a multi-column index
CREATE INDEX idx_employees_name
ON employees(last_name, first_name);
-- Drop an index
DROP INDEX idx_employees_last_name;
Stored Functions
Reusable blocks of code. In Postgres, these are typically functions that return a value or a table.
-- Create a function in the plpgsql language
CREATE OR REPLACE FUNCTION find_employee_by_id(id INT)
RETURNS SETOF employees AS $$
BEGIN
RETURN QUERY
SELECT * FROM employees WHERE employee_id = id;
END;
$$ LANGUAGE plpgsql;
-- Call the function
SELECT * FROM find_employee_by_id(1);
Triggers
A function that automatically runs when an event (INSERT, UPDATE, DELETE) occurs on a table.
This is a two-step process:
-- 1. Create the trigger FUNCTION
CREATE OR REPLACE FUNCTION log_last_update()
RETURNS TRIGGER AS $$
BEGIN
-- NEW refers to the row being inserted or updated
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- 2. Bind the function to a table with a TRIGGER
CREATE TRIGGER trg_employees_update
BEFORE UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION log_last_update();
Advanced Features
JSONB
Store and query JSON data efficiently.
CREATE TABLE products (id SERIAL, data JSONB);
INSERT INTO products (data)
VALUES ('{"name": "Coffee", "tags": ["hot", "drink"]}');
-- Query a JSON key (->> returns as text)
SELECT * FROM products WHERE data->>'name' = 'Coffee';
-- Check if a JSON array contains a value
SELECT * FROM products WHERE data @> '{"tags": ["hot"]}';
Window Functions
Perform aggregate calculations over a "window" of rows without collapsing them.
-- Get each employee's salary AND the average salary for their job
SELECT
first_name,
job,
hourly_pay,
AVG(hourly_pay) OVER (PARTITION BY job) AS avg_job_pay,
RANK() OVER (ORDER BY hourly_pay DESC) AS pay_rank
FROM employees;
Web_App
Linux Server Setup & MERN App Deployment
These are the steps to set up an Ubuntu server from scratch and deploy a MERN app with the PM2 process manager and Nginx. We are using Linode, but you could just as well use a different cloud provider or your own machine or VM.
Create an account at Linode
Click on Create Linode
Choose your server options (OS, region, etc)
SSH Keys
You will see on the setup page an area to add an SSH key.
There are a few ways you can log in to your server. You can use passwords; however, if you want to be more secure, I suggest setting up SSH keys and then disabling passwords. That way you can only log in to your server from a PC that has the correct keys set up.
I am going to show you how to set up authentication with SSH, but if you want to just use a password, you can skip most of this.
You need to generate an SSH key on your local machine to log in to your server remotely. Open your terminal and type
ssh-keygen
By default, it will create your public and private key files in the .ssh directory on your local machine and name them id_rsa and id_rsa.pub. You can change this if you want; just make sure, when it asks, that you enter the entire path to the key as well as the filename. I am using id_rsa_linode
Once you do that, you need to copy the public key. You can use the cat command and then copy the key
cat ~/.ssh/id_rsa_linode.pub
Copy the key. It will look something like this:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEwMkP0KHX19q2dM/9pB9dpB2B/FwdeP4egXCgdEOraJuqGvaylKgbu7XDFinP6ByqJQg/w8vRV0CsFXrnr+Lh51fKv8ZPvV/yRIMjxKzNn/0+asatkjrkOwT3f3ipbzfS0bsqfWTHivZ7UNMrOHaaSezxvJpPGbW3aoTCFSA/sUUUSiWZ65v7I/tFkXE0XH+kSDFbLUDDNS1EzofWZFRcdSFbC3zrGsQHN3jcit6ba7bACQYixxFCgVB0mZO9SOgFHC64PEnZh5hJ8h8AqIjf5hDF9qFdz2jFEe/4aQmKQAD3xAPKTXDLLngV/2yFF0iWpnJ9MZ/mJoLVzhY2pfkKgnt/SUe/Hn1+jhX4wrz7wTDV4xAe35pmnajFjDppJApty+JOzKf3ifr4lNeZ5A99t9Pu0294BhYxm7/mKXiWPsevX9oSZxSJmQUtqWWz/KBVoVjlTRgAgLYbKCNBzmw7+qdRxoxxscQCQrCpJMlat56vxK8cjqiESvduUu78HHE= trave@ASUS
Now paste that into the Linode.com textarea and name it (e.g., My PC)
At some point, you will be asked to enter a root password for your server as well.
Connecting as Root
Finish the setup and then you will be taken to your dashboard. The status will probably say Provisioning. Wait until it says Running, then open your local machine's terminal and connect as root. Of course, you want to use your own server's IP address:
ssh root@69.164.222.31
At this point, passwords are enabled, so you will be asked for your root password.
If you authenticate and log in, you should see a welcome message and your prompt should now say root@localhost:~#. This is your remote server.
I usually suggest updating and upgrading your packages
sudo apt update
sudo apt upgrade
Create a new user
Right now you are logged in as root and it is a good idea to create a new account. Using the root account can be a security risk.
You can check your current user with the command:
whoami
It will say root right now.
Let's add a new user. I am going to call my user brad
adduser brad
Just hit Enter through all the questions. You will be asked for a user password as well.
You can use the following command to see the user info including the groups it belongs to
id brad
Now, let's add this user to the "sudo" group, which will give them root privileges.
usermod -aG sudo brad
Now if you run the following command, you should see sudo
id brad
Add SSH keys for new account
If you are using SSH, you will want to set up SSH keys for the new account. We do this by adding the public key to a file called authorized_keys in the user's home directory.
Go to the new user's home directory
cd /home/brad
Create a .ssh directory and go into it
mkdir .ssh
cd .ssh
Create a new file called authorized_keys
touch authorized_keys
Now you want to put your public key in that file. You can open it with a simple text editor called nano
sudo nano authorized_keys
Now you can paste your key in here. Just repeat the step above where we ran cat on your public key file and copy the output. IMPORTANT: Make sure you open a new terminal for this that is not logged into your server.
Now paste the key into the file, hit Ctrl+X (or Cmd+X), then hit Y to save, and hit Enter again.
Disabling passwords
This is an extra security step. Like I said earlier, we can disable passwords so that only your local machine with the correct SSH keys can log in.
Open the following file on your server
sudo nano /etc/ssh/sshd_config
Look for where it says
PasswordAuthentication Yes
Remove the # if there is one and change the Yes to No
If you want to disable root login altogether, you could change PermitRootLogin to no as well. Be sure to remove the # sign, because that comments the line out.
Save the file by exiting (Ctrl+X) and hit Y to save.
Now you need to restart the sshd service
sudo systemctl restart sshd
Now you can log out by just typing logout
Try logging back in with your user (use your own username and server's IP)
ssh brad@69.164.222.31
If you get a message like "Permission denied (publickey)" or something similar, run the following commands:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa_linode # replace this with whatever you called your key file
Try logging in again, and you should see the welcome message without having to type any password.
Node.js setup
Now that we have provisioned our server and have a user set up with SSH keys, it's time to start setting up our app environment. Let's start by installing Node.js.
We can install Node.js with curl using the following commands
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
# Check to see if node was installed
node --version
npm --version
Get files on the server
We want to get our application files onto the server. We will use Git for this. I am using the goal setter app from my MERN stack series on YouTube
On your SERVER, go to where you want the app to live and clone the repo you want to deploy from GitHub (or wherever else).
Here is the repo I will be using. Feel free to deploy the same app: https://github.com/bradtraversy/mern-tutorial
mkdir sites
cd sites
git clone https://github.com/bradtraversy/mern-tutorial.git
Now I should have a folder called mern-tutorial with all of my files and folders.
App setup
There are a few things that we need to do, including setting up the .env file, installing dependencies, and building our static assets for React.
.env file
With this particular application, I created a .envexample file because I did not want to push the actual .env file to GitHub. So you first need to rename that .envexample:
mv .envexample .env
# To check
ls -a
Now we need to edit that file
sudo nano .env
Change the NODE_ENV to "production" and change the MONGO_URI to your own. You can create a database on MongoDB Atlas.
Exit and save.
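For reference, the edited file should end up looking roughly like this (the URI is a placeholder; keep any other variables from .envexample, such as a port or JWT secret, as the app expects):

NODE_ENV=production
MONGO_URI=mongodb+srv://<user>:<password>@yourcluster.mongodb.net/yourdb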
Dependencies & Build
We need to install the server dependencies. This should be run from the root of the mern-tutorial folder. NOT the backend folder.
npm install
Install frontend deps:
cd frontend
npm install
We need to build our static assets as well. Do this from the frontend folder
npm run build
Run the app
Now we should be able to run the app like we do on our local machine. Go into the root and run
npm start
If you go to your IP address on port 5000, you should see your app. In my case, I would go to
http://69.164.222.31:5000
Even though we see our app running, we are not done. We don't want to leave a terminal open with npm start, and we don't want to have to go to port 5000. So let's fix that.
Stop the app from running with Ctrl+C
PM2 Setup
PM2 is a production process manager for Node.js. It allows us to keep Node apps running without having to keep a terminal open with npm start like we do for development.
Let's first install PM2 globally with NPM
sudo npm install -g pm2
Run with PM2
pm2 start backend/server.js # or whatever your entry file is
Now if you go back to your server IP and port 5000, you will see it running. You could even close your terminal and it would still be running.
There are other pm2 commands for various tasks as well that are pretty self-explanatory:
- `pm2 show app`
- `pm2 status`
- `pm2 restart app`
- `pm2 stop app`
- `pm2 logs` (show log stream)
- `pm2 flush` (clear logs)
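One extra pair of commands worth knowing (not covered above): PM2 can re-launch your app automatically after a server reboot.

pm2 startup   # prints a command that registers PM2 as a boot service; run it
pm2 save      # saves the current process list so it is restored on boot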
Firewall Setup
Obviously we don't want users to have to enter port 5000 or anything else. We are going to solve that with a web server called NGINX. Before we set that up, let's set up a firewall so that nobody can directly access any port except those for SSH, HTTP, and HTTPS.
The firewall we are using is called UFW. Let's enable it.
sudo ufw enable
You will notice now that if you go to the site using :5000, it will not work. That is because we set up a firewall that blocks all ports.
You can check the status of the firewall with
sudo ufw status
Now let's open the ports that we need, which are 22, 80, and 443
sudo ufw allow ssh    # port 22
sudo ufw allow http   # port 80
sudo ufw allow https  # port 443
Setup NGINX
Now we need to install NGINX to serve our app on port 80, the default HTTP port
sudo apt install nginx
If you visit your IP address with no port number, you will see a Welcome to nginx! page.
Now we need to configure a proxy for our MERN app.
Open the following config file
sudo nano /etc/nginx/sites-available/default
Find the `location /` block and replace it with this
location / {
proxy_pass http://localhost:5000; # or whichever port your app runs on
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
Above that, you can also put the domain that you plan on using:
server_name yourdomain.com www.yourdomain.com;
Save and close the file
You can check your nginx configuration with the following command
sudo nginx -t
Now restart the NGINX service:
sudo service nginx restart
Now you should see your app when you go to your IP address in the browser.
Domain Name
You probably don't want to use your IP address to access your app in the browser, so let's go over setting up your domain with Linode.
You need to register your domain. It doesn't matter which registrar you use. I use Namecheap, but you could use GoDaddy, Google Domains, or anyone else.
You need to change the nameservers with your Domain registrar. The process can vary depending on who you use. With Namecheap, the option is right on the details page.
You want to add the following nameservers:
- ns1.linode.com
- ns2.linode.com
- ns3.linode.com
- ns4.linode.com
- ns5.linode.com
Technically this could take up to 48 hours, but it almost never takes that long. In my own experience, it is usually 30 - 90 minutes.
Set your domain in Linode
Go to your dashboard and select Domains and then Create Domain
Add in your domain name and link to the Linode with your app, then submit the form.
Now you will see some info like SOA Record, NS Record, MX Record, etc. There are A records already added that link to your IP address, so you don't have to worry about that. If you wanted to add a subdomain, you could create an A record here for that.
Like I said, it may take a few hours, but you should be all set. You have now deployed your application.
If you want to make changes to your app, just push to GitHub and run a git pull on your server. There are other tools to help automate your deployments, but I will go over that another time.
Set Up SSL
You can purchase an SSL certificate and set it up with your domain registrar, or you can use Let's Encrypt and set one up for free using the following commands:
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
# Only valid for 90 days, test the renewal process with
certbot renew --dry-run
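On most modern distros the certbot package also installs a systemd timer (or cron job) that performs the renewal automatically; assuming a systemd-based server, you can confirm it is scheduled with:
systemctl list-timers | grep certbot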
Comprehensive Guide: Docker, NGINX, and Production Node.js Deployment
This document provides a detailed, two-part guide: first, on setting up a basic NGINX web server using Docker for serving static files, and second, on deploying a production-ready Node.js application using NGINX as a reverse proxy with SSL security.
Part 1: Setting Up NGINX to Serve Static Files
This section focuses on containerization, basic package management, and NGINX configuration to serve a simple HTML and CSS website.
1. Docker Environment Setup
Docker is a platform used to develop, ship, and run applications in isolated environments called containers. Weโll use an Ubuntu container as our lightweight server environment.
Pulling and Running the Container
We use the docker run command to create and start the container, mapping a port on your host machine to the containerโs internal web server port.
| Command | Purpose |
|---|---|
docker pull ubuntu | Fetches the latest Ubuntu OS image from Docker Hub, the default public registry. |
docker run -it -p 9090:80 ubuntu | Runs a new container from the ubuntu image. |
-it | Keeps STDIN open (-i) and allocates a pseudo-terminal (-t), allowing you to interact with the container's shell. |
-p 9090:80 | Port mapping: Forwards traffic from the host machineโs port 9090 to the containerโs internal port 80 (where NGINX will listen). |
2. Installing Packages and Starting NGINX
Once inside the containerโs shell, we install the necessary tools.
Installation Commands
# Update package lists and upgrade installed packages
apt update && apt upgrade
# Install NGINX (the web server) and Neovim (a powerful text editor)
apt install nginx neovim
Verifying and Starting the Web Server
| Command | Purpose |
|---|---|
nginx -v | Verification: Confirms NGINX installed correctly and displays the version. |
nginx | Execution: Starts the NGINX web server process. By default, it listens for HTTP traffic on port 80 within the container. |
⚠️ Common Mistake: Missing the -it flag. If you omit -it when running the container, it will exit immediately because it has no foreground process to keep it alive. Solution: use docker run -it ..., or docker start [container_id] followed by docker attach [container_id] if the container has already been created.
3. NGINX Configuration for Static Files
The primary NGINX configuration file is located at /etc/nginx/nginx.conf. We will modify this file to serve our websiteโs static content.
Configuration Workflow
- Navigate: cd /etc/nginx
- Backup: mv nginx.conf nginx.backup (preserves the default configuration)
- Create/Edit: nvim nginx.conf
- Reload: nginx -s reload (applies the new configuration without stopping the server)
Creating the Static Content
We must create the website files before referencing them in the NGINX configuration.
# Create a root directory for the website inside /etc/nginx
mkdir MyWebSite
# Create the essential files
touch MyWebSite/index.html
touch MyWebSite/style.css
Sample Website Files
MyWebSite/index.html
<html>
<head>
<title>Ahmed X Nginx</title>
<link rel="stylesheet" href="style.css" />
</head>
<body>
<h1>Hello From NGINX</h1>
<p>This is a simple NGINX WebPage</p>
</body>
</html>
MyWebSite/style.css
body {
background-color: black;
color: white;
}
Final NGINX Static File Configuration (nginx.conf)
This configuration tells NGINX where to find the files and how to handle file types.
events {
# The events block handles how NGINX manages connections (e.g., number of worker processes).
}
http {
# The http block contains server configurations.
# ๐ MIME Types: Defines file extensions and their corresponding content types (crucial for browsers)
types {
text/css css;
text/html html;
}
server {
# The server block defines a virtual host.
listen 80; # Listen for HTTP requests on the container's port 80.
server_name _; # Wildcard: Matches requests for any domain name.
# ๐ฏ Root Directive: Defines the base directory for file lookups.
root /etc/nginx/MyWebSite;
# When a request comes in (e.g., http://host:9090/), NGINX will look for
# index.html inside the directory defined by the 'root' directive.
}
}
Testing: After reloading NGINX (nginx -s reload), you should be able to access the website by pointing your host machine's browser to http://localhost:9090.
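You can also verify from the host's terminal (assuming curl is installed) that NGINX serves the page with the correct Content-Type:
# Fetch the page and show the response headers
curl -i http://localhost:9090/
# The stylesheet should report Content-Type: text/css
curl -I http://localhost:9090/style.css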
Part 2: Production Deployment of a Node.js Application
This section details using NGINX as a reverse proxy to deploy a Node.js application, including process management, firewall setup, and SSL encryption.
4. Application and Infrastructure Setup
In a production environment, we deploy the Node.js application on a high, non-standard port (e.g., 5173) and use NGINX to handle the public-facing traffic on the standard port (80/443).
Installing Required Tools
The installation command includes all necessary components for a robust deployment.
apt install git neovim nginx tmux nodejs ufw python3-certbot-nginx
| Tool | Purpose |
|---|---|
nodejs | The runtime environment for the application. |
git | For cloning the project source code. |
tmux | A terminal multiplexer for managing multiple sessions (useful for running background tasks). |
ufw | The Uncomplicated Firewall, used to secure the server. |
python3-certbot-nginx | The tool for obtaining and configuring SSL/TLS certificates from Letโs Encrypt. |
Cloning and Installing the Project
# Clone the repository containing the Node.js project
git clone https://github.com/akibahmed229/Java-Employee_Management-System-Website.git
# Navigate into the project folder
cd Java-Employee_Management-System-Website
# Install dependencies defined in package.json
npm install
Process Management and Firewall
We use PM2 (Process Manager 2) to ensure the Node.js application runs continuously and automatically restarts if it crashes.
- Install PM2 globally:
  sudo npm i pm2 -g
- Start the application:
  pm2 start index.js --name "myapp"   # Note: Using 'index.js' is more explicit than 'index'
- Enable the firewall (allow SSH first so a remote session is not locked out):
  sudo ufw allow ssh
  # Enables the firewall (WARNING: this will block all unapproved traffic)
  sudo ufw enable
  # Explicitly allow inbound HTTP traffic on standard web port 80
  sudo ufw allow 'Nginx HTTP'   # or: sudo ufw allow 80
5. Configuring NGINX as a Reverse Proxy
A reverse proxy sits in front of the application server, accepting client requests and forwarding them to the application. This setup centralizes security (SSL), load balancing, and static file serving, leaving the Node.js app to focus purely on business logic.
Reverse Proxy Configuration (/etc/nginx/nginx.conf)
events { }
http {
server {
# Listen for connections on standard HTTP port 80 (IPv4 and IPv6)
listen 80 default_server;
listen [::]:80 default_server;
server_name yourdomain.com www.yourdomain.com; # IMPORTANT: Replace with your actual domain
location / {
# ๐ฏ The core reverse proxy directive: Forward requests to the Node.js app running locally on port 5173
proxy_pass http://localhost:5173;
# ๐ค Essential Proxy Headers for correct communication
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; # Required for WebSockets
proxy_set_header Connection 'upgrade'; # Required for WebSockets
proxy_set_header Host $host; # Passes the original domain name to the backend app
proxy_cache_bypass $http_upgrade; # Ensures WebSocket requests bypass any proxy cache
}
# NOTE: The original 'root /var/www/html;' and 'index ...' directives are typically
# removed or placed in a separate location block when using a reverse proxy for the root location.
}
}
QOL Enhancement: Using Multiple Config Files. In production, it's better practice to create a dedicated configuration file for your site in /etc/nginx/sites-available/yourdomain.conf and create a symbolic link to it in /etc/nginx/sites-enabled/. This avoids cluttering the main nginx.conf and makes managing multiple sites easier.
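As a sketch of that workflow (yourdomain.conf is a placeholder for your own file name):
# Enable the site by linking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/yourdomain.conf /etc/nginx/sites-enabled/
# Validate the config, then reload NGINX without downtime
sudo nginx -t && sudo systemctl reload nginx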
6. Securing with SSL/TLS (HTTPS)
SSL/TLS (Secure Sockets Layer/Transport Layer Security) encrypts communication between the userโs browser and the server, creating HTTPS. We use Certbot with the Letโs Encrypt service to automate this process.
Installing the Certificate
The certbot command automatically edits the NGINX configuration to redirect HTTP (port 80) traffic to HTTPS (port 443) and adds the necessary certificate files.
# This command automatically obtains a certificate for your domain and configures NGINX
certbot --nginx -d yourdomain.com -d www.yourdomain.com
Testing Automated Renewal
Letโs Encrypt certificates are only valid for 90 days, so automated renewal is essential.
# Performs a dry run to test the renewal process without actually renewing
certbot renew --dry-run
Best Practice: Port Management. After enabling SSL, confirm your firewall (ufw) is allowing HTTPS traffic on port 443: sudo ufw allow 'Nginx Full'.
Advanced Techniques: Optimizing a Dockerized NGINX/Node.js Stack
This section explores advanced concepts for performance, security, and maintainability in your deployed environment.
7. Performance and Hardening with NGINX
A. Caching Static Assets
NGINX can significantly improve page load times by caching static files like images, CSS, and JavaScript.
Advanced Configuration Snippet:
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
# Match common static file extensions
expires 30d; # Tell the client's browser to cache these files for 30 days
add_header Pragma "public";
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
# Ensure NGINX serves these files directly (important when using a root directive)
root /path/to/static/assets;
# โ ๏ธ Use a separate location block for caching, not the main proxy_pass block
}
B. Rate Limiting for Security
Rate limiting prevents abuse and Denial of Service (DoS) attacks by restricting the number of requests a single client can make over a period of time.
# 1. Define the limit zone in the http block
# 'mylimit' is the zone name, 1m is the size (1MB), and 5r/s is 5 requests per second
limit_req_zone $binary_remote_addr zone=mylimit:1m rate=5r/s;
server {
# 2. Apply the limit in the server or location block
location /login/ {
# Burst allows a short burst of requests above the limit before throttling.
limit_req zone=mylimit burst=10 nodelay;
proxy_pass http://localhost:5173;
}
}
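A quick, rough way to watch the limiter work (assuming the server is reachable on localhost and curl is installed) is to fire more requests than the burst allows and print the status codes; NGINX answers throttled requests with 503 by default:
# Send 20 rapid requests; once the burst of 10 is exhausted, expect 503s
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost/login/
done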
8. Docker Best Practices and Automation
A. Using Multi-Stage Builds
When creating a production Docker image for a Node.js application, using a multi-stage build dramatically reduces the final image size by discarding build-time dependencies.
Conceptual Dockerfile Snippet:
# Stage 1: Build Stage (Uses a heavy image for building)
FROM node:20-slim as builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build # Assuming a build script exists
# Stage 2: Production Stage (Uses a tiny image for running)
FROM node:20-slim
WORKDIR /app
# pm2-runtime is not part of the base image; install PM2 globally so CMD can find it
RUN npm install -g pm2
# Only copy the essential files from the builder stage
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/index.js ./ # Or whatever your entry file is
CMD [ "pm2-runtime", "start", "index.js" ]
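Assuming this Dockerfile sits in the project root and the app listens on port 5173, building and running it would look something like this (the image name my-node-app is arbitrary):
# Build the image from the current directory
docker build -t my-node-app .
# Run it in the background, mapping host port 5173 to the container
docker run -d --name my-node-app -p 5173:5173 my-node-app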
B. PM2 Ecosystem File
Instead of managing startup via the command line, use a PM2 Ecosystem file (ecosystem.config.js) to standardize configuration, logging, and environment variables.
Example:
module.exports = {
apps: [
{
name: "node-app-prod",
script: "./index.js",
instances: "max", // Run on all available CPU cores
exec_mode: "cluster",
env: {
NODE_ENV: "production",
PORT: 5173,
},
},
],
};
Start command: pm2 start ecosystem.config.js
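To make the process list survive server reboots, you can also register PM2 with your init system and save the currently running apps:
pm2 startup   # prints a command to run once; registers PM2 as a startup service
pm2 save      # saves the current process list so it is restored at boot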
9. Troubleshooting and Diagnostics
A. Checking NGINX Configuration Errors
Before reloading NGINX, always check the configuration syntax to prevent downtime.
nginx -t
# Output should be: "syntax is ok" and "test is successful"
B. Diagnosing PM2/Node.js Issues
If your application isnโt responding through NGINX, check the logs and status of your PM2 process.
| Command | Purpose |
|---|---|
pm2 status | Shows the current running status, uptime, and process ID. |
pm2 logs myapp | Streams the applicationโs standard output and error logs in real-time. |
pm2 monit | Opens a real-time terminal dashboard to monitor CPU, Memory, and logs. |
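A useful first check is to bypass NGINX entirely and hit the Node.js app directly (assuming it listens on port 5173 as configured above); if this works but your domain does not, the problem is in the NGINX layer rather than the app:
curl -I http://localhost:5173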
Distribution
NixOS
NixOS Command Cheatsheet
A collection of useful Nix and NixOS commands for system management.
System & Store Maintenance
-
Verify & Repair Store: Checks the integrity of the Nix store and repairs any issues. Use this if you suspect corruption.
sudo nix-store --repair --verify --check-contents -
Garbage Collection: Removes all unused packages from the Nix store to free up space.
sudo nix-collect-garbage -d
sudo nix-collect-garbage --delete-older-than 7d
sudo nix store gc
Generation Management
-
List System Generations: Shows all past system configurations (generations).
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system -
Switch Generation (No Reboot): Allows you to roll back to a previous system configuration without restarting.
-
List generations:
nix-env --list-generations -p /nix/var/nix/profiles/system -
Switch to generation:
sudo nix-env --switch-generation <number> -p /nix/var/nix/profiles/system -
Activate configuration:
sudo /nix/var/nix/profiles/system/bin/switch-to-configuration switch -
Set Booted Generation as Default: If you boot into an older generation, run this to make it the default.
/run/current-system/bin/switch-to-configuration boot
-
System Rebuilding
- Rebuild without Cache: Forces a rebuild without using cached tarballs.
  sudo nixos-rebuild switch --flake .#host --option tarball-ttl 0
- Rebuild on a Remote Machine: Uses sudo on the remote machine during activation.
  nixos-rebuild --use-remote-sudo switch --flake .#host
Flake Management
-
- Update Flake Inputs: Updates flake dependencies and commits the changes to flake.lock.
  nix flake update --commit-lock-file --accept-flake-config
- Update Flake Inputs (with an auth token): Passes a GitHub auth token to avoid API rate limits.
  nix flake update --option access-tokens "github.com=$(gh auth token)"
- Inspect Flake Metadata: Shows flake metadata in JSON format.
  nix flake metadata --json | nix run nixpkgs#jq -- .
Development & Packaging
-
Prefetch URL: Downloads a file and prints its hash. Essential for packaging.
nix-prefetch-url "https://discord.com/api/download?platform=linux&format=tar.gz" -
- Evaluate a Nix File: Evaluates a Nix expression from a file.
  nix-instantiate --eval default.nix
Nixpkgs Legacy: Using Old OpenSSH with DSS
Sometimes you need to connect to legacy SSH servers that only support ssh-dss (DSA) keys. Modern Nixpkgs disables DSS by default, but you can pin an older package.
1. Create a Nix file for legacy OpenSSH
legacy-ssh.nix:
{ pkgs ? import <nixpkgs> {} }:
let
# Pin an older nixpkgs commit with DSS support
legacyPkgs = import (builtins.fetchTarball {
url = "https://github.com/NixOS/nixpkgs/archive/2f6ef9aa6a7eecea9ff7e185ca40855f36597327.tar.gz";
sha256 = "0jcs9r4q57xgnbrc76davqy10b1xph15qlkvyw1y0vk5xw5vmxfz";
}) {};
in
legacyPkgs.openssh
Browse older package versions: Nix Versions
2. Build the package
nix build -f legacy-ssh.nix
3. Use the legacy ssh binary
./result/bin/ssh -F /dev/null \
-o HostKeyAlgorithms=ssh-dss \
-o KexAlgorithms=diffie-hellman-group1-sha1 \
-o PreferredAuthentications=password,keyboard-interactive \
admin@192.168.0.1 -vvv
Explanation of key options:
- -F /dev/null: Ignore the default SSH config.
- HostKeyAlgorithms=ssh-dss: Allow DSS host keys.
- KexAlgorithms=diffie-hellman-group1-sha1: Use the legacy key exchange.
- PreferredAuthentications=password,keyboard-interactive: Only use password or keyboard-interactive login.
NixOS with LUKS, LVM, and Btrfs: A Comprehensive Guide
๐งญ Table of Contents
- ๐ฟ NixOS Installation: Manual Partitioning with LUKS + LVM + Btrfs
- โ Extending an Encrypted LVM Volume with a New Disk
- โ Removing an Encrypted Disk from an LVM Volume
- ๐ LUKS Command Reference
- ๐ LVM Command Reference
- ๐ Btrfs Command Reference
- ๐ง System Recovery: Chrooting with a Live USB
1. ๐ฟ NixOS Installation: Manual Partitioning with LUKS + LVM + Btrfs
This section guides you through a fresh installation of NixOS on a single disk.
Prerequisites
- Boot the NixOS installer.
- Connect to the internet.
- Switch to a root shell: sudo -i.
- Identify your target disk: lsblk.
- Set a variable for your device: export DEVICE=/dev/sda
  (Note: on SATA/virtio disks partitions are named ${DEVICE}1, ${DEVICE}2, and so on; on NVMe they are ${DEVICE}p1, ${DEVICE}p2. The commands below use the NVMe-style p suffix, so adjust it for your device.)
Step 1. Wipe Disk and Create Partition Table
๐จ Warning: This will destroy all data on the specified disk.
vgchange -a n root_vg # deactivate the volume group (only relevant when re-running on a previously configured disk)
sgdisk --zap-all ${DEVICE}
# Optional: overwrite the disk with random data (this can take hours)
dd if=/dev/urandom of=${DEVICE} bs=4096 status=progress
Step 2. Create Partitions
We will create a standard 4-partition layout for a modern UEFI system.
# Partition 1: 1M BIOS Boot partition (for GRUB compatibility)
sgdisk --new=1:0:+1M --typecode=1:EF02 --change-name=1:boot ${DEVICE}
# Partition 2: 500M EFI System Partition (ESP)
sgdisk --new=2:0:+500M --typecode=2:EF00 --change-name=2:ESP ${DEVICE}
# Partition 3: 4G Swap partition
sgdisk --new=3:0:+4G --typecode=3:8200 --change-name=3:swap ${DEVICE}
# Partition 4: The rest of the disk for our encrypted data
# We use 8E00 which is the typecode for "Linux LVM"
sgdisk --new=4:0:0 --typecode=4:8E00 --change-name=4:root ${DEVICE}
Step 3. Format Unencrypted Filesystems
Format the ESP and swap partitions, giving them labels for easy mounting.
# Format the EFI partition
mkfs.vfat -n ESP ${DEVICE}p2
# Set up the swap partition
mkswap -L swap ${DEVICE}p3
Step 4. Set Up LUKS Encryption and LVM ๐
This is the core of the setup. We create an encrypted container on our main partition and then build an LVM structure inside it.
# 1. Create the LUKS encrypted container on the fourth partition.
# You will be prompted to enter and confirm a strong passphrase. Remember this!
echo "Formatting the LUKS container. Please enter your encryption passphrase."
cryptsetup luksFormat -v -s 512 -h sha512 --label crypted ${DEVICE}p4
# 2. Open the LUKS container to make it accessible.
# This creates a decrypted "virtual" device at /dev/mapper/crypted.
echo "Opening the LUKS container. Please enter your passphrase."
cryptsetup open ${DEVICE}p4 crypted
# 3. Set up LVM *inside* the decrypted container.
# Initialize the physical volume (PV) on the decrypted device
pvcreate /dev/mapper/crypted
# Create the volume group (VG) named "root_vg"
vgcreate root_vg /dev/mapper/crypted
# Create the logical volume (LV) named "root" that uses all available space
lvcreate -l 100%FREE -n root root_vg
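At this point you can sanity-check each layer of the stack before formatting:
pvs   # should list /dev/mapper/crypted as a physical volume
vgs   # should list the root_vg volume group
lvs   # should list the root logical volume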
Step 5. Format the LVM Volume with Btrfs
Now, we format the LVM logical volume (not the physical partition) with Btrfs.
mkfs.btrfs -L root /dev/root_vg/root
Step 6. Create and Mount Btrfs Subvolumes
We use Btrfs subvolumes to separate parts of our system, which is standard practice for NixOS.
# 1. Mount the top-level Btrfs volume
mount /dev/root_vg/root /mnt
# 2. Create the subvolumes
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/persist
btrfs subvolume create /mnt/nix
# 3. Unmount the top-level volume
umount /mnt
# 4. Mount the root subvolume with correct options
mount -o subvol=root,compress=zstd,noatime /dev/root_vg/root /mnt
# 5. Create the directories for the other mountpoints
mkdir -p /mnt/persist
mkdir -p /mnt/nix
mkdir -p /mnt/boot
# 6. Mount the other subvolumes
mount -o subvol=persist,noatime,compress=zstd /dev/root_vg/root /mnt/persist
mount -o subvol=nix,noatime,compress=zstd /dev/root_vg/root /mnt/nix
Step 7. Mount Boot Partition and Activate Swap
Finish by mounting the ESP and activating the swap.
# Mount the boot partition
mount ${DEVICE}p2 /mnt/boot
# Activate the swap partition
swapon ${DEVICE}p3
Step 8. Generate NixOS Configuration
Finally, generate the NixOS configuration. The installer will automatically detect the LUKS and LVM setup.
nixos-generate-config --root /mnt
Your /mnt/etc/nixos/hardware-configuration.nix will be auto-generated with the correct LUKS and filesystem entries, similar to this:
# Example /etc/nixos/hardware-configuration.nix
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[ (modulesPath + "/profiles/qemu-guest.nix")
];
boot.initrd.availableKernelModules = [ "ahci" "xhci_pci" "virtio_pci" "sr_mod" "virtio_blk" ];
boot.initrd.kernelModules = [ "dm-snapshot" ];
boot.kernelModules = [ "kvm-intel" ];
boot.extraModulePackages = [ ];
# This part is automatically added to unlock your disk at boot
boot.initrd.luks = {
devices."crypted" = {
device = "/dev/disk/by-label/crypted";
preLVM = true;
};
};
# These are your Btrfs subvolumes
fileSystems."/" =
{ device = "/dev/mapper/root_vg-root";
fsType = "btrfs";
options = [ "subvol=root" ];
};
fileSystems."/persist" =
{ device = "/dev/mapper/root_vg-root";
fsType = "btrfs";
options = [ "subvol=persist" ];
};
fileSystems."/nix" =
{ device = "/dev/mapper/root_vg-root";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
# Your boot and swap partitions
fileSystems."/boot" = {
device = "/dev/disk/by-label/ESP";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
swapDevices =[ { device = "/dev/disk/by-label/swap"; } ];
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}
You can now proceed with editing your configuration.nix and running nixos-install.
2. โ Extending an Encrypted LVM Volume with a New Disk
Use this guide when youโve added a new physical disk (e.g., /dev/vdb) and want to add its encrypted space to your existing root_vg.
๐จ Pre-flight Check: Backup
Before you begin, ensure you have a backup of any critical data.
Step 1. Partition and Label the New Disk
Weโll create a single partition on the new disk (/dev/vdb) and give it a partition label for easy identification.
# Open parted for /dev/vdb
sudo parted /dev/vdb
# Inside parted, run the following commands:
# (parted)
mklabel gpt
mkpart primary 0% 100%
name 1 crypted_ext
quit
This creates /dev/vdb1 and sets its partition name (label) to crypted_ext.
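You can confirm the partition label is visible before encrypting; this by-partlabel path is what configuration.nix will reference later:
ls -l /dev/disk/by-partlabel/
# should show something like: crypted_ext -> ../../vdb1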
Step 2. Create and Open the LUKS Encrypted Container
Now, encrypt the new partition.
# Encrypt /dev/vdb1.
# -> IMPORTANT: Use the EXACT SAME password as your main encryption.
# -> This allows NixOS to unlock both with a single password prompt.
sudo cryptsetup luksFormat /dev/vdb1
# Open the new LUKS container so we can work with it.
sudo cryptsetup luksOpen /dev/vdb1 crypted_ext_mapper
The unlocked device is now available at /dev/mapper/crypted_ext_mapper.
Step 3. Integrate the New Encrypted Disk into LVM
Add the newly decrypted device as a Physical Volume (PV) to your existing Volume Group (VG).
# 1. Create a new Physical Volume (PV) on the unlocked container.
sudo pvcreate /dev/mapper/crypted_ext_mapper
# 2. Extend your existing 'root_vg' Volume Group with this new PV.
sudo vgextend root_vg /dev/mapper/crypted_ext_mapper
# 3. (Verification) Check your Volume Group. It should now be larger.
sudo vgs
Step 4. Extend the Logical Volume and Btrfs Filesystem
Make the new space available to your filesystem.
# 1. Extend the Logical Volume to use 100% of the new free space.
sudo lvextend -l +100%FREE /dev/mapper/root_vg-root
# 2. Resize the Btrfs filesystem to fill the newly expanded Logical Volume.
sudo btrfs filesystem resize max /
# 3. (Verification) Check your disk space.
df -h /
Step 5. Update configuration.nix
This is the most critical step. You must tell NixOS to unlock this second device at boot.
Edit your /etc/nixos/configuration.nix file and add the new device to boot.initrd.luks.
# Your configuration.nix
boot.initrd.luks = {
devices."crypted" = {
device = "/dev/disk/by-label/crypted"; # This is your original /dev/vda4
preLVM = true;
};
# --- ADD THIS NEW BLOCK ---
devices."crypted_ext" = {
# Use the partition label you set in Step 1
device = "/dev/disk/by-partlabel/crypted_ext";
preLVM = true;
allowDiscards = true; # Good practice for SSDs/VMs
};
};
Note on Passwords: Because you used the same password for both LUKS devices, NixOS will ask for your password only once at boot and use it to unlock both containers.
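If the passphrases ended up different, you don't have to reformat: LUKS supports multiple key slots, so you can add a matching passphrase to the new device as an extra key.
# Prompts for an existing passphrase, then for the new one to add
sudo cryptsetup luksAddKey /dev/vdb1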
Step 6. Rebuild and Reboot
DO NOT REBOOT until you have applied the new configuration.
# Apply your new NixOS configuration
sudo nixos-rebuild switch
# Now it is safe to reboot
sudo reboot
3. โ Removing an Encrypted Disk from an LVM Volume
This guide covers the complex process of removing a disk (e.g., /dev/vdb) from a Volume Group when your filesystem spans multiple disks.
๐จ WARNING: This is a high-risk operation. A mistake can lead to total data loss. Back up all critical data before proceeding. This process almost always requires booting from a Live Linux ISO because you cannot shrink a mounted root filesystem.
The Goal
Our goal is to move all data off /dev/vdb (which is part of root_vg) onto your other disk (/dev/mapper/crypted) and then remove /dev/vdb from the LVM setup.
The Problem
You cannot pvmove data off /dev/vdb because there is no free space on the other disk to move it to. You must first shrink your filesystem and logical volume to be smaller than the size of the disk you want to keep.
Example:
- Disk 1 (/dev/mapper/crypted): 35G
- Disk 2 (/dev/vdb): 20G
- Total root_vg size: 55G
- Your Goal: You must shrink your Btrfs filesystem and LV to < 35G (e.g., 34G).
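Before shrinking, verify that the used data actually fits in the target size; if it exceeds 34G you must delete or move data first:
# Shows allocated vs. used space on the Btrfs filesystem
sudo btrfs filesystem usage /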
Step 1. Boot from a Live Linux ISO
- Attach a NixOS, Ubuntu, or other Linux ISO to your VM or machine and boot from it.
- Open a terminal.
Step 2. Unlock Encrypted Disks and Activate LVM
# 1. Unlock your *main* encrypted partition (the one you are keeping)
# Replace /dev/vda4 with your actual partition
sudo cryptsetup luksOpen /dev/vda4 crypted
# 2. Unlock the *second* disk's encrypted partition
# (This assumes /dev/vdb is encrypted, following the guide in section 2)
sudo cryptsetup luksOpen /dev/vdb1 crypted_ext
# 3. Activate the LVM Volume Group
sudo vgchange -ay
Step 3. Resize Btrfs and LV (Offline)
This is the most critical part.
# 1. Run a filesystem check (highly recommended)
sudo btrfs check /dev/mapper/root_vg-root
# 2. Shrink the Btrfs filesystem.
# We set it to 34G, which is smaller than our 35G target disk.
sudo btrfs filesystem resize 34G /dev/mapper/root_vg-root
# 3. Shrink the Logical Volume to match.
sudo lvreduce -L 34G /dev/mapper/root_vg-root
Step 4. Reboot into Your Normal System
The offline part is done.
sudo reboot
Remove the Live ISO and boot back into your NixOS. Your system will boot up on a smaller filesystem.
Step 5. Migrate Data and Remove the Disk (Online)
Now that you are back in your system, sudo vgs should show free space in root_vg.
# 1. Load the 'dm-mirror' module, which pvmove needs
sudo modprobe dm_mirror
# 2. Move all data extents off the disk you want to remove.
# This will move data from crypted_ext to the free space on crypted.
# (The device was opened at boot as /dev/mapper/crypted_ext, per configuration.nix.)
sudo pvmove -v /dev/mapper/crypted_ext
# 3. Remove the now-empty Physical Volume from the Volume Group.
sudo vgreduce root_vg /dev/mapper/crypted_ext
# 4. Remove the LVM metadata from the device.
sudo pvremove /dev/mapper/crypted_ext
Step 6. Update configuration.nix and Clean Up
- Edit your /etc/nixos/configuration.nix and remove the crypted_ext entry from boot.initrd.luks.
- Rebuild your system:
  sudo nixos-rebuild switch
- You can now safely close the LUKS container and reboot. The disk /dev/vdb is completely free.
  sudo cryptsetup luksClose crypted_ext
  sudo reboot
4. ๐ LUKS Command Reference
Common cryptsetup commands for managing LUKS devices.
- Format a new LUKS container:
  # --label is recommended for use in /dev/disk/by-label/
  cryptsetup luksFormat --label crypted /dev/sda4
- Open (decrypt) a container:
  # This creates a device at /dev/mapper/my_decrypted_volume
  cryptsetup luksOpen /dev/sda4 my_decrypted_volume
- Close (lock) a container:
  cryptsetup luksClose my_decrypted_volume
- Add a new password (key slot):
  # You will be prompted for an *existing* password first.
  cryptsetup luksAddKey /dev/sda4
- Remove a password:
  # You will be prompted for the password you wish to remove.
  cryptsetup luksRemoveKey /dev/sda4
- View header information (and key slots):
  cryptsetup luksDump /dev/sda4
- Resize an online LUKS container (useful if you resize the underlying partition):
  cryptsetup resize my_decrypted_volume
5. ๐ LVM Command Reference
Common commands for managing LVM.
Physical Volume (PV) - The Disks
- Initialize a disk for LVM:
  pvcreate /dev/mapper/crypted
- List physical volumes:
  pvs
  pvdisplay
- Move data from one PV to another (within the same VG):
  # Moves all data *off* /dev/sdb1
  pvmove /dev/sdb1
  # Moves data from /dev/sdb1 *to* /dev/sdc1
  pvmove /dev/sdb1 /dev/sdc1
- Remove LVM metadata from a disk:
  # Only run this *after* removing the PV from its VG.
  pvremove /dev/sdb1
Volume Group (VG) - The Pool of Disks
- Create a new VG:
  # Creates a VG named "my_vg" using two disks
  vgcreate my_vg /dev/sdb1 /dev/sdc1
- List volume groups:
  vgs
  vgdisplay
- Add a disk (PV) to an existing VG:
  vgextend my_vg /dev/sdd1
- Remove a disk (PV) from a VG:
  # The PV must be empty (use pvmove first).
  vgreduce my_vg /dev/sdb1
- Remove a VG:
  # Make sure all LVs are removed first.
  vgremove my_vg
Logical Volume (LV) - The โPartitionsโ
- Create a new LV:
  # Create a 50G LV named "my_lv" from the "my_vg" pool
  lvcreate -L 50G -n my_lv my_vg
  # Create an LV using all remaining free space
  lvcreate -l 100%FREE -n my_other_lv my_vg
- List logical volumes:
  lvs
  lvdisplay
- Extend an LV (and its filesystem):
  # Extend the LV to be 100G in total
  lvresize -L 100G /dev/my_vg/my_lv
  # Add 20G to the LV's current size
  lvresize -L +20G /dev/my_vg/my_lv
  # Extend the LV to use all free space in the VG
  lvextend -l +100%FREE /dev/my_vg/my_lv
  # --- IMPORTANT ---
  # After extending, you must resize the filesystem inside it.
  # For ext4: resize2fs /dev/my_vg/my_lv
  # For btrfs: btrfs filesystem resize max /path/to/mountpoint
- Reduce an LV (and its filesystem): 🚨 DANGEROUS! You must shrink the filesystem first.
  # 1. Shrink the filesystem (e.g., ext4, UNMOUNTED)
  resize2fs /dev/my_vg/my_lv 40G
  # 2. Shrink the LV to match
  lvreduce -L 40G /dev/my_vg/my_lv
  # For Btrfs, you can often do it online:
  # 1. Shrink Btrfs
  btrfs filesystem resize 40G /path/to/mountpoint
  # 2. Shrink LV
  lvreduce -L 40G /dev/my_vg/my_lv
- Remove an LV:
  # Make sure it's unmounted first.
  lvremove /dev/my_vg/my_lv
6. ๐ Btrfs Command Reference
Common commands for managing Btrfs filesystems and subvolumes.
- Format a device:
  # -L sets the label
  mkfs.btrfs -L root /dev/my_vg/my_lv
- Resize a filesystem:
  # Grow to fill the maximum available space (after an lvextend)
  btrfs filesystem resize max /path/to/mountpoint
  # Set to a specific size (e.g., 50G)
  btrfs filesystem resize 50G /path/to/mountpoint
  # Shrink by 10G
  btrfs filesystem resize -10G /path/to/mountpoint
- Show filesystem usage:
  # Btrfs-aware 'df'
  btrfs filesystem df /path/to/mountpoint
- Create a subvolume:
  # Mount the top-level (ID 5) volume first
  mount /dev/my_vg/my_lv /mnt
  # Create subvolumes
  btrfs subvolume create /mnt/root
  btrfs subvolume create /mnt/nix
  umount /mnt
- List subvolumes:
  btrfs subvolume list /path/to/mountpoint
- Delete a subvolume:
  # Deleting a subvolume is recursive and instant
  btrfs subvolume delete /mnt/nix
- Create a snapshot:
  # Create a read-only snapshot of 'root'
  btrfs subvolume snapshot -r /mnt/root /mnt/root-snapshot
  # Create a writable snapshot (a clone)
  btrfs subvolume snapshot /mnt/root /mnt/root-clone
- Check a Btrfs filesystem (unmounted):
  btrfs check /dev/my_vg/my_lv
7. ๐ง System Recovery: Chrooting with a Live USB
If your system fails to boot due to a broken configuration, a kernel panic, or a faulty GRUB, you can use a Live USB (like the NixOS installer) to chroot into your installation and fix it. The nixos-enter command is a powerful script that makes this much easier.
Prerequisites
- Boot from a NixOS installer ISO.
- Connect to the internet (if you need to download packages).
- Open a terminal and get a root shell:
sudo -i.
Step 1. Identify and Unlock LUKS Volumes
First, find your encrypted partitions.
lsblk
You will need to identify all partitions that are part of your LVM root_vg. In the setup from this guide, there are two: the main crypted partition (e.g., /dev/vda4) and the extended one crypted_ext (e.g., /dev/vdb1).
๐จ Important: You must unlock ALL LUKS volumes that are part of your Volume Group, otherwise LVM will fail to activate.
# Unlock the primary disk (e.g., /dev/vda4)
cryptsetup luksOpen /dev/vda4 crypted
# Unlock the extended disk (e.g., /dev/vdb1)
cryptsetup luksOpen /dev/vdb1 crypted_ext
Enter your single passphrase when prompted for each.
Step 2. Activate the LVM Volume Group
Tell LVM to scan for and activate the Volume Groups now available on the decrypted devices.
# Scan for and activate all volume groups
vgchange -ay
You should see a message that root_vg is now active.
Step 3. Mount Filesystems for nixos-enter
nixos-enter is smart, but it needs the root (/) and boot (/boot) partitions mounted at /mnt.
# 1. Mount the Btrfs root subvolume
# This is the subvolume you set for '/' in your configuration.nix
mount -o subvol=root /dev/mapper/root_vg-root /mnt
# 2. Mount the boot (ESP) partition
# This is VITAL for fixing GRUB. Find your ESP (e.g., /dev/vda2)
mkdir -p /mnt/boot
mount /dev/vda2 /mnt/boot
Step 4. Chroot into Your System
With the root and boot partitions mounted, you can now use nixos-enter. It will automatically find your /nix store and other subvolumes.
nixos-enter
Your prompt should change, and you are now โinsideโ your broken NixOS installation as the root user.
Step 5. Perform Repairs (Inside the Chroot)
Here are common fixes for a broken system.
Scenario 1: Fix a Broken configuration.nix
This is the most common fix. You made a change, rebuilt, and now it wonโt boot.
# 1. Edit your configuration to fix the typo or bad option
nano /etc/nixos/configuration.nix
# 2. Rebuild the system.
# 'nixos-rebuild switch' will build and make it the default.
nixos-rebuild switch
# If you are less confident, 'nixos-rebuild boot' will build it
# and set it as the default, but won't activate it immediately.
nixos-rebuild boot
Scenario 2: Roll Back to a Previous Generation
If you just want to undo your last build, you can roll back.
# This will build your *previous* configuration and make it the default.
nixos-rebuild boot --rollback
# You can also list all generations and switch to a specific one:
nix-env -p /nix/var/nix/profiles/system --list-generations
nix-env -p /nix/var/nix/profiles/system --switch-generation 123
Scenario 3: Manually Reinstall GRUB
If nixos-rebuild doesnโt fix a โno bootable deviceโ error, GRUB itself might be broken.
# This command reinstalls GRUB to your EFI directory.
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=nixos
After this, itโs still a good idea to run nixos-rebuild switch to ensure GRUBโs configuration file is also correct.
Step 6. Exit and Reboot
Once you are finished, exit the chroot and unmount everything.
# 1. Exit the chroot
exit
# 2. Unmount all partitions
umount -R /mnt
# 3. Reboot the system
reboot
Remove your Live USB, and your system should now boot into the fixed configuration.
Installation
Arch Linux Installation Guide
This guide provides step-by-step instructions for installing Arch Linux.
Table of Contents
- Keyboard Layout Setup
- Connecting to Wi-Fi
- SSH Connection to Another Device
- Date and Time Setup
- Disk Management for Installation
- System Installation
- Configuring the New Installation (arch-chroot)
- Edit The Mkinitcpio File For Encrypt
- Grub Installation
- Enabling Systemd Services
- Creating a New User
- Finishing the Installation
- Post-Installation Configuration
1. Keyboard Layout Setup
Load the keyboard layout using the following commands:
localectl
localectl list-keymaps
localectl list-keymaps | grep us
loadkeys us
Explanation:
- localectl: Lists the current keyboard layout settings.
- localectl list-keymaps: Lists all available keyboard layouts.
- localectl list-keymaps | grep us: Filters the list to show only layouts containing "us" (United States layout).
- loadkeys us: Sets the keyboard layout to US.
2. Connecting to Wi-Fi
Connect to a Wi-Fi network using the following commands:
iwctl
device list
station wlan0 get-networks
station wlan0 connect wifiname
ip a
ping -c 5 google.com
Explanation:
- iwctl: Launches the interactive Wi-Fi control utility.
- device list: Lists available network devices.
- station wlan0 get-networks: Scans for available Wi-Fi networks.
- station wlan0 connect wifiname: Connects to the specified Wi-Fi network (replace "wifiname" with the actual network name).
- ip a: Displays the network interfaces and their IP addresses.
- ping -c 5 google.com: Pings Google to test the internet connection.
3. SSH Connection to Another Device
Set a password and establish an SSH connection to another device:
passwd
ssh root@ipaddress
Explanation:
- passwd: Sets the password for the root user on the live system.
- ssh root@ipaddress: Connects to this machine over SSH from another device (replace "ipaddress" with this machine's actual IP address).
4. Date and Time Setup
Set the date and time for the system:
timedatectl
timedatectl list-timezones
timedatectl list-timezones | grep Dhaka
timedatectl set-timezone Asia/Dhaka
timedatectl
Explanation:
- timedatectl: Displays the current system time and date settings.
- timedatectl list-timezones: Lists all available time zones.
- timedatectl list-timezones | grep Dhaka: Filters the list to show time zones containing "Dhaka" (replace with your desired time zone).
- timedatectl set-timezone Asia/Dhaka: Sets the system's time zone to "Asia/Dhaka" (replace with your desired time zone).
- timedatectl: Verifies the updated time and date settings.
5. Disk Management for Installation
Manage the disk partitions for the installation:
lsblk
ls /sys/firmware/efi/efivars
blkid /dev/vda
cfdisk
lsblk
mkfs.btrfs -f /dev/vda1
mkfs.fat -F32 /dev/vda2
blkid /dev/vda
mount /dev/vda1 /mnt
cd /mnt
btrfs subvolume create @
btrfs subvolume create @home
cd
umount /mnt
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@ /dev/vda1 /mnt
mkdir /mnt/home
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@home /dev/vda1 /mnt/home
mkdir -p /mnt/boot/efi
mount /dev/vda2 /mnt/boot/efi
mkdir /mnt/windows
lsblk
Explanation:
- lsblk: Lists available block devices and their partitions.
- ls /sys/firmware/efi/efivars: Verifies that the system is booted in UEFI mode.
- blkid /dev/vda: Displays information about the /dev/vda drive (replace with the appropriate drive if different).
- cfdisk: Opens an interactive partitioning tool; create two partitions: 1) the main system partition and 2) the EFI partition.
Disk Encryption
-
cryptsetup luksFormat /dev/vda1: Sets up encryption on the main partition. -
cryptsetup luksOpen /dev/vda1 main: Opens your encrypted partition as /dev/mapper/main. -
lsblk: Lists the updated block devices and their partitions after partitioning. -
mkfs.btrfs -f /dev/mapper/main: Formats the decrypted system partition (/dev/mapper/main) as Btrfs. -
mkfs.fat -F32 /dev/vda2: Formats the EFI System partition (/dev/vda2) as FAT32. -
blkid /dev/vda: Verifies the UUID of the formatted partition. -
mount /dev/mapper/main /mnt: Mounts the System partition (main) to the /mnt directory. -
cd /mnt: Changes the current directory to /mnt. -
btrfs subvolume create @: Creates a Btrfs subvolume named "@" for the root directory. -
btrfs subvolume create @home: Creates a Btrfs subvolume named "@home" for the home directory. -
cd: Returns to the previous directory. -
umount /mnt: Unmounts the /mnt directory. -
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@ /dev/vda1 /mnt: Mounts the System partition (/dev/vda1) with Btrfs subvolume โ@โ, applying specified mount options. -
mkdir /mnt/home: Creates the /mnt/home directory. -
mount -o noatime,ssd,space_cache=v2,compress=zstd,discard=async,subvol=@home /dev/vda1 /mnt/home: Mounts the System partition (/dev/vda1) with Btrfs subvolume โ@homeโ to the /mnt/home directory, applying specified mount options. -
mkdir -p /mnt/boot/efi: Creates the /mnt/boot/efi directory. -
mount /dev/vda2 /mnt/boot/efi: Mounts the EFI System partition (/dev/vda2) to the /mnt/boot/efi directory.
(Optional) For Windows partition:
- mkdir /mnt/windows: Creates the /mnt/windows directory.
- lsblk: Lists available block devices and their partitions to identify the Windows partition.
6. System Installation
Install the base system:
reflector --country Bangladesh --age 6 --sort rate --save /etc/pacman.d/mirrorlist
pacman -Sy
pacstrap -K /mnt base linux linux-firmware intel-ucode vim
genfstab -U /mnt >> /mnt/etc/fstab
cat /mnt/etc/fstab
Explanation:
- reflector --country Bangladesh --age 6 --sort rate --save /etc/pacman.d/mirrorlist: Updates the mirrorlist with the fastest mirrors in Bangladesh (replace with your desired country).
- pacman -Sy: Synchronizes the package databases.
- pacstrap -K /mnt base linux linux-firmware intel-ucode vim: Installs the essential base packages (add any others you may need).
- genfstab -U /mnt >> /mnt/etc/fstab: Generates an fstab file based on the current disk configuration.
- cat /mnt/etc/fstab: Displays the generated fstab file for verification.
7. Configuring the New Installation (arch-chroot)
Enter the newly installed system for configuration:
arch-chroot /mnt
ls
ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime
hwclock --systohc
vim /etc/locale.gen
locale-gen
echo "LANG=en_US.UTF-8" >> /etc/locale.conf
echo "KEYMAP=us" >> /etc/vconsole.conf
vim /etc/hostname
passwd
pacman -S grub-btrfs efibootmgr networkmanager network-manager-applet dialog wpa_supplicant mtools dosfstools reflector base-devel linux-headers bluez bluez-utils cups hplip alsa-utils pipewire pipewire-alsa pipewire-pulse pipewire-jack bash-completion openssh rsync acpi acpi_call tlp sof-firmware acpid os-prober ntfs-3g
Explanation:
- arch-chroot /mnt: Changes the root into the newly installed system (/mnt).
- ls: Lists the contents of the root directory to verify the chroot environment.
- ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime: Creates a symbolic link from the system's time zone file to /etc/localtime, setting the time zone to "Asia/Dhaka" (replace with your desired time zone).
- hwclock --systohc: Sets the hardware clock from the system clock.
- vim /etc/locale.gen: Opens the locale.gen file for editing; uncomment the line containing "en_US.UTF-8" by removing the leading "#" character.
- locale-gen: Generates the locales based on the uncommented entries in locale.gen.
- echo "LANG=en_US.UTF-8" >> /etc/locale.conf: Sets the LANG variable in locale.conf to "en_US.UTF-8".
- echo "KEYMAP=us" >> /etc/vconsole.conf: Sets the KEYMAP variable in vconsole.conf to "us" (replace with your desired keyboard layout).
- vim /etc/hostname: Opens the hostname file for editing; set the hostname to "arch" (replace with your desired hostname).
- passwd: Sets the root password.
- pacman -S grub-btrfs efibootmgr networkmanager ... ntfs-3g: Installs the packages the system needs, including GRUB, network management tools, Bluetooth support, printer support, audio utilities, and other useful tools. Adjust the list to your requirements.
8. Edit The Mkinitcpio File For Encrypt
- vim /etc/mkinitcpio.conf and search for HOOKS;
- add encrypt (before the filesystems hook);
- add atkbd to MODULES (enables an external keyboard at the disk-decryption prompt);
- add btrfs to MODULES; and,
- regenerate the initramfs (see the sketch below):
mkinitcpio -p linux
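As a rough sketch (the default hook list varies between Arch releases, so edit your own file rather than copying this verbatim), the resulting lines in /etc/mkinitcpio.conf would look like:
MODULES=(atkbd btrfs)
HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt filesystems fsck)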
9. Grub Installation
Install and configure Grub:
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg
vim /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg
- Run blkid and obtain the UUID of the main partition:
blkid /dev/vda1
- Edit the GRUB config (use the UUID from your own blkid output):
nvim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet cryptdevice=UUID=d33844ad-af1b-45c7-9a5c-cf21138744b4:main root=/dev/mapper/main"
- Regenerate the GRUB config with:
grub-mkconfig -o /boot/grub/grub.cfg
Explanation:
- grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB: Installs the GRUB bootloader on the EFI System partition (/dev/vda2) with the bootloader ID "GRUB".
- grub-mkconfig -o /boot/grub/grub.cfg: Generates the GRUB configuration file based on the installed operating systems.
- vim /etc/default/grub: Opens the GRUB configuration file for editing; uncomment the line with "os-prober" by removing the leading "#" character, which allows GRUB to detect other installed operating systems.
- grub-mkconfig -o /boot/grub/grub.cfg: Regenerates the GRUB configuration file to include the changes.
10. Enabling Systemd Services
Enable necessary systemd services:
systemctl enable NetworkManager
systemctl enable bluetooth
systemctl enable cups.service
systemctl enable sshd
systemctl enable tlp
systemctl enable reflector.timer
systemctl enable fstrim.timer
systemctl enable acpid
Explanation:
- systemctl enable NetworkManager: Enables NetworkManager to manage network connections.
- systemctl enable bluetooth: Enables the Bluetooth service.
- systemctl enable cups.service: Enables the CUPS (Common Unix Printing System) service for printer support.
- systemctl enable sshd: Enables the SSH server for remote access.
- systemctl enable tlp: Enables the TLP service for power management.
- systemctl enable reflector.timer: Enables the Reflector timer to update the mirrorlist regularly.
- systemctl enable fstrim.timer: Enables the fstrim timer to trim the filesystem regularly.
- systemctl enable acpid: Enables the ACPI (Advanced Configuration and Power Interface) daemon.
11. Creating a New User
Create a new user and grant sudo access:
useradd -m akib
passwd akib
echo "akib ALL=(ALL) ALL" >> /etc/sudoers.d/akib
usermod -c 'Akib Ahmed' akib
exit
Explanation:
- useradd -m akib: Creates a new user account named "akib"; the -m flag creates the user's home directory.
- passwd akib: Sets the password for the newly created user "akib".
- echo "akib ALL=(ALL) ALL" >> /etc/sudoers.d/akib: Grants sudo access to "akib" by adding a sudoers drop-in file for the user.
- usermod -c 'Akib Ahmed' akib: Sets the user's full name to "Akib Ahmed" (replace with the desired full name).
- exit: Exits the chroot environment.
12. Finishing the Installation
Unmount partitions and reboot the system:
umount -R /mnt
reboot
Explanation:
- umount -R /mnt: Recursively unmounts all partitions mounted under /mnt.
- reboot: Reboots the system.
Once the system reboots, you can log in with the newly created user and continue the setup process.
13. Post-Installation Configuration
After logging in with the newly created user, perform the following steps:
nmtui
- Opens the NetworkManager Text User Interface (TUI) for managing network connections.
ip -c a
- Displays the IP addresses and network interfaces for verification.
grub-mkconfig -o /boot/grub/grub.cfg
- Generates the GRUB configuration file to include any changes made during the post-installation steps.
sudo pacman -S git
- Installs the Git package.
git clone https://aur.archlinux.org/yay-bin.git
- Clones the Yay AUR (Arch User Repository) package from the AUR repository.
ls
cd yay-bin/
makepkg -si
cd
- Changes directory to the cloned โyay-binโ directory, builds the package, and installs it using
makepkg.
yay
- Verifies the successful installation of Yay by running the command.
yay -S timeshift-bin timeshift-autosnap
- Installs the Timeshift packages from the AUR using Yay.
sudo timeshift --list-devices
- Lists the available devices for creating Timeshift snapshots.
sudo timeshift --snapshot-device /dev/vda1
- Sets the device (/dev/vda1) to be used for creating Timeshift snapshots.
sudo timeshift --create --comments "First Backup" --tags D
- Creates a Timeshift snapshot with a comment and assigns it the โDโ tag for easy identification.
sudo grub-mkconfig -o /boot/grub/grub.cfg
- Generates the GRUB configuration file again to include any changes made during the post-installation steps.
Ensure you have read and understood each step before proceeding. These additional steps cover various post-installation configurations, including network setup, package installation with Yay, and creating a Timeshift backup.
Happy Arch Linux configuration! ๐ง
Gentoo Installation Guide
This comprehensive guide provides a detailed walkthrough for installing Gentoo Linux. Adjustments may be required based on your specific hardware and preferences.
Prerequisites
- A reliable internet connection.
- A virtual or physical machine with a target disk (e.g., /dev/vdx).
1. Check Internet Connection
Make sure your internet connection is working:
ping -c 5 www.google.com
2. Disk Partitioning
Partition your disk using fdisk:
fdisk /dev/vdx
Follow these steps in fdisk:
- Press g to create a GPT partition table.
- Create partitions for boot, swap, and root using n.
- Change partition types using t: set boot to EFI System and swap to Linux swap.
Format partitions:
mkfs.vfat -F 32 /dev/vdx1
mkswap /dev/vdx2
swapon /dev/vdx2
mkfs.ext4 /dev/vdx3
Mount the root partition:
mkdir -p /mnt/gentoo
mount /dev/vdx3 /mnt/gentoo
3. Installing a Stage Tarball
Navigate to the Gentoo mirrors and download the stage3 tarball:
cd /mnt/gentoo
links https://www.gentoo.org/downloads/mirrors/
tar xpvf stage3-*.tar.xz --xattrs-include='*.*' --numeric-owner
vi /mnt/gentoo/etc/portage/make.conf
In make.conf, specify your CPU architecture and core:
COMMON_FLAGS="-march=alderlake -O2 -pipe"
MAKEOPTS="-j8"
FEATURES="candy parallel-fetch parallel-install"
ACCEPT_LICENSE="*"
4. Installing the Gentoo Base System
Select a mirror:
mirrorselect -i -o >> /mnt/gentoo/etc/portage/make.conf
Create necessary directories:
mkdir -p /mnt/gentoo/etc/portage/repos.conf
cp /mnt/gentoo/usr/share/portage/config/repos.conf /mnt/gentoo/etc/portage/repos.conf/gentoo.conf
cp --dereference /etc/resolv.conf /mnt/gentoo/etc/
Mount essential filesystems:
mount --types proc /proc /mnt/gentoo/proc
mount --rbind /sys /mnt/gentoo/sys
mount --make-rslave /mnt/gentoo/sys
mount --rbind /dev /mnt/gentoo/dev
mount --make-rslave /mnt/gentoo/dev
mount --bind /run /mnt/gentoo/run
mount --make-slave /mnt/gentoo/run
Chroot into the new environment:
chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) ${PS1}"
Mount the EFI boot partition:
mkdir /efi
mount /dev/vdx1 /efi
5. Configuring Portage Package Manager of Gentoo
emerge-webrsync
emerge --sync
emerge --sync --quiet
eselect profile list
eselect profile set 9
emerge --ask --verbose --update --deep --newuse @world
nano /etc/portage/make.conf
In make.conf, add USE flags:
USE="-gtk -gnome qt5 kde dvd alsa cdr"
Create a package.license directory and edit the kernel license:
mkdir /etc/portage/package.license
nvim /etc/portage/package.license/kernel
Add the following licenses:
app-arch/unrar unRAR
sys-kernel/linux-firmware @BINARY-REDISTRIBUTABLE
sys-firmware/intel-microcode intel-ucode
6. Timezone and Locale Configuration
Set your timezone:
ls /usr/share/zoneinfo
echo "Asia/Dhaka" > /etc/timezone
emerge --config sys-libs/timezone-data
Configure locales:
emerge app-editors/neovim
nvim /etc/locale.gen
Uncomment the necessary locales and set the default:
en_US ISO-8859-1
en_US.UTF-8 UTF-8
Set the locale:
locale-gen
eselect locale list
eselect locale set 6
env-update && source /etc/profile && export PS1="(chroot) ${PS1}"
7. Configuring the Kernel
emerge --ask sys-kernel/linux-firmware
emerge --ask sys-kernel/gentoo-sources
eselect kernel list
eselect kernel set 1
emerge --ask sys-apps/pciutils
cd /usr/src/linux
make menuconfig
make && make modules_install
make install
Alternatively, use Genkernel:
emerge --ask sys-kernel/linux-firmware
emerge --ask sys-kernel/genkernel
genkernel --mountboot --install all
ls /boot/vmlinu* /boot/initramfs*
ls /lib/modules
Or, Use the binary kernel:
emerge --ask sys-kernel/gentoo-kernel
emerge --ask --autounmask-write sys-kernel/gentoo-kernel-bin
etc-update
emerge -a sys-kernel/gentoo-kernel-bin
8. Configuring Fstab and Networking
Edit fstab to reflect your disk configuration:
neovim /etc/fstab
Add entries for EFI, swap, and root partitions:
/dev/vdx1 /efi vfat defaults 0 2
/dev/vdx2 none swap sw 0 0
/dev/vdx3 / ext4 defaults,noatime 0 1
Configure networking:
echo virt > /etc/hostname
emerge --ask --noreplace net-misc/netifrc
nvim /etc/conf.d/net
Add your network configuration:
config_enp1s0="dhcp"
Set networking to start at boot:
cd /etc/init.d
ln -s net.lo net.enp1s0
rc-update add net.enp1s0 default
9. Editing Hosts and System Configuration
Edit the hosts file:
nano /etc/hosts
Add or edit the hosts file with appropriate entries:
127.0.0.1 virt localhost
::1 virt localhost
Set system information:
passwd
nano /etc/conf.d/hwclock
Edit hwclock configuration:
clock="local"
10. System Logger and Additional Software
emerge --ask app-admin/sysklogd
rc-update add sysklogd default
rc-update add sshd default
nano -w /etc/inittab
Add SERIAL CONSOLES configuration:
s0:12345:respawn:/sbin/agetty 9600 ttyS0 vt100
s1:12345:respawn:/sbin/agetty 9600 ttyS1 vt100
Install additional software:
emerge --ask sys-fs/e2fsprogs
emerge --ask sys-block/io-scheduler-udev-rules
emerge --ask net-misc/dhcpcd
emerge --ask net-dialup/ppp
emerge --ask net-wireless/iw net-wireless/wpa_supplicant
11. Boot Loader
echo 'GRUB_PLATFORMS="efi-64"' >> /etc/portage/make.conf
emerge --ask --verbose sys-boot/grub
grub-install --target=x86_64-efi --efi-directory=/efi
grub-mkconfig -o /boot/grub/grub.cfg
exit
cd
umount -l /mnt/gentoo/dev{/shm,/pts,}
umount -R /mnt/gentoo
reboot
12. Adding a User for Daily Use
useradd -m -G users,wheel,audio -s /bin/bash akib
passwd akib
Removing Tarballs
rm /stage3-*.tar.*
13. Sound (PipeWire) Setup
emerge -av media-libs/libpulse
emerge --ask media-video/pipewire
emerge --ask media-video/wireplumber
usermod -aG pipewire akib
emerge --ask sys-auth/rtkit
usermod -rG audio akib
mkdir /etc/pipewire
cp /usr/share/pipewire/pipewire.conf /etc/pipewire/pipewire.conf
mkdir ~/.config/pipewire
cp /usr/share/pipewire/pipewire.conf ~/.config/pipewire/pipewire.conf
Add the following configuration to ~/.config/pipewire/pipewire.conf:
context.properties = {
default.clock.rate = 192000
default.clock.allowed-rates = [ 192000 48000 44100 ] # Up to 16 can be specified
}
14. Xorg Setup
Edit /etc/portage/make.conf and add the following:
USE="X"
INPUT_DEVICES="libinput synaptics"
VIDEO_CARDS="nouveau"
VIDEO_CARDS="radeon"
Install Xorg drivers and server:
emerge --ask --verbose x11-base/xorg-drivers
emerge --ask x11-base/xorg-server
env-update
source /etc/profile
15. Setting up Display Manager (SDDM)
emerge --ask x11-misc/sddm
usermod -a -G video sddm
vim /etc/sddm.conf
Add the following lines:
[X11]
DisplayCommand=/etc/sddm/scripts/Xsetup
Create /etc/sddm/scripts/Xsetup:
mkdir -p /etc/sddm/scripts
touch /etc/sddm/scripts/Xsetup
chmod a+x /etc/sddm/scripts/Xsetup
Edit /etc/conf.d/xdm and add:
DISPLAYMANAGER="sddm"
Enable the display manager at boot:
rc-update add xdm default
emerge --ask gui-libs/display-manager-init
vim /etc/conf.d/display-manager
From there add,
CHECKVT=7
DISPLAYMANAGER="sddm"
after that add it to the service,
rc-update add display-manager default
rc-service display-manager start
16. Desktop Installation (KDE Plasma)
eselect profile list
eselect profile set X
Set the number according to the desktop environment you want. For KDE Plasma:
emerge --ask kde-plasma/plasma-meta
emerge konsole
emerge firefox-bin
Create ~/.xinitrc and add:
#!/bin/sh
exec dbus-launch --exit-with-session startplasma-x11
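With SDDM enabled you normally won't need this file at login, but if you prefer launching Plasma manually from a TTY, you can start the session with:
startx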
This guide is designed to provide a comprehensive and detailed walkthrough for installing Gentoo Linux. Feel free to customize it further based on your specific needs and preferences.
Tools
The Complete Linux & Bash Command-Line Guide
Master the Linux command line from first principles to advanced automation. This comprehensive guide organizes commands by what you want to accomplish, making it your go-to reference whether youโre taking your first steps or optimizing complex workflows.
๐งญ Table of Contents
- Foundations: Understanding the Command Line
- Navigation: Finding Your Way Around
- File Operations: Creating, Moving, and Deleting
- Reading and Viewing Files
- Searching: Finding Files and Text
- Advanced Text Processing: Power Tools
- Users, Permissions, and Access Control
- Process and System Management
- Networking Essentials
- Archives and Compression
- Bash Scripting: Automating Tasks
- Input/Output Redirection
- Advanced Techniques and Power User Features
- Troubleshooting and Debugging
1. Foundations: Understanding the Command Line
The Anatomy of a Command
Every Linux command follows a predictable pattern that, once understood, unlocks the entire system:
command -options arguments
- command: The program or tool you're invoking (like ls to list files)
- -options: Modifiers that change behavior, also called flags or switches (like -l for "long format")
- arguments: What you want the command to operate on (like /home/user)
Example breakdown:
ls -la /var/log
│   │    └─ argument (which directory)
│   └────── options (long format + all files)
└────────── command (list contents)
The Pipe: Your Most Powerful Tool
The pipe operator | is the cornerstone of command-line productivity. It channels the output of one command directly into the input of another, letting you chain simple tools into sophisticated operations.
cat server.log | grep "ERROR" | wc -l
What happens here:
- `cat` outputs the entire log file
- `|` feeds that output to `grep`
- `grep` filters for lines containing "ERROR"
- `|` feeds those filtered lines to `wc`
- `wc -l` counts how many lines remain
Think of pipes as assembly lines: each command does one thing well, then passes its work to the next station.
Essential Survival Skills
Getting Help When You're Stuck
man command_name
The man (manual) command is your built-in encyclopedia. Every standard command has a manual page explaining its purpose, options, and usage. Navigate with arrow keys, search with /search_term, and quit with q.
⚠️ Common Mistake: Forgetting that man exists and searching online first. While web searches are valuable, man pages are authoritative, always available offline, and specific to your system's version.
Quick reference alternatives:
- `command --help` or `command -h`: Brief usage summary (faster than `man`)
- `apropos keyword`: Search all manual pages for a keyword
Tab Completion: Stop Typing So Much
Press Tab at any point while typing a command, filename, or path. The shell will:
- Complete the word if thereโs only one match
- Show you all possibilities if there are multiple matches
- Save you from typos and help you discover available options
Pro tip: Press Tab twice quickly to see all possible completions without typing anything.
Quoting Rules That Matter
Quotes aren't stylistic; they fundamentally change how the shell interprets your input:
Double quotes ": The shell expands variables and substitutions
echo "Hello, $USER" # Outputs: Hello, akib
echo "Current dir: $(pwd)" # Outputs: Current dir: /home/akib
Single quotes ': Everything is literalโno expansions occur
echo 'Hello, $USER' # Outputs: Hello, $USER
echo 'Cost: $50' # Outputs: Cost: $50
When to use which:
- Use double quotes by default for strings containing variables
- Use single quotes when you want literal text (like in `sed` or `awk` patterns; see the sketch below)
- Use no quotes for simple, single-word arguments
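A quick sketch of why this matters for sed patterns (demo.txt is a hypothetical file):

```bash
echo 'Hello $USER' > demo.txt
sed 's/$USER/world/' demo.txt    # single quotes: sed sees the literal text $USER and replaces it
sed "s/\$USER/$USER/" demo.txt   # double quotes: the shell expands $USER in the replacement
```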
The sudo Privilege System
Linux protects critical system operations by requiring administrator privileges. Rather than logging in as the dangerous "root" user, use sudo to execute individual commands with elevated rights:
sudo apt update # Update package lists (requires admin)
sudo reboot # Restart the system
How it works: sudo (Superuser Do) temporarily grants your command root privileges. You'll be prompted for your password the first time, then you have a grace period (typically 15 minutes) before it asks again.
⚠️ Warning: With great power comes great responsibility. sudo can break your system if misused. Always double-check commands that start with sudo.
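Two related flags worth knowing for managing that grace period:

```bash
sudo -v   # re-validate: refresh the cached credentials without running a command
sudo -k   # kill the cache: the next sudo will ask for your password again
```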
2. Navigation: Finding Your Way Around
Understanding Where You Are
The Linux filesystem is a tree structure. Unlike Windows with its separate drives (C:, D:), everything branches from a single root /.
pwd
Print Working Directory shows your current location:
/home/akib/projects/website
Best practice: Run pwd when you're disoriented. It's free and instant.
Seeing What's Around You
ls
The list command shows directory contents, but it's far more powerful with options:
ls -la
This is the command you'll use 90% of the time:
- `-l`: Long format showing permissions, owner, size, date
- `-a`: Show all files, including hidden ones (starting with `.`)
Output anatomy:
drwxr-xr-x 5 akib akib 4096 Oct 24 10:30 Documents
-rw-r--r-- 1 akib akib 2048 Oct 23 15:42 notes.txt
│└┬┘└┬┘└┬┘ │ │    │    │    │            └─ filename
│ │  │  │  │ │    │    │    └────────────── modification date
│ │  │  │  │ │    │    └─────────────────── size in bytes
│ │  │  │  │ │    └──────────────────────── group
│ │  │  │  │ └───────────────────────────── owner
│ │  │  │  └─────────────────────────────── number of links
│ │  │  └────────────────────────────────── permissions (others)
│ │  └───────────────────────────────────── permissions (group)
│ └──────────────────────────────────────── permissions (owner)
└────────────────────────────────────────── file type (d=directory, -=file)
Each permission triplet has three slots: read (r), write (w), and execute/search (x).
Useful variations:
- `ls -lh`: Human-readable sizes (2.1M instead of 2048576)
- `ls -lt`: Sort by time (newest first)
- `ls -lS`: Sort by size (largest first)
- `ls -lR`: Recursive (show subdirectories too)
Moving Between Directories
cd directory_name
Change Directory is your navigation command. It understands both absolute and relative paths:
Absolute paths start from root /:
cd /var/log # Go directly to /var/log from anywhere
Relative paths start from your current location:
cd Documents # Go into Documents subdirectory
cd ../Downloads # Go up one level, then into Downloads
cd ../../shared/data # Go up two levels, then down a different branch
Special shortcuts:
cd # Go to your home directory (/home/username)
cd ~ # Same as above (~ means "home")
cd - # Return to previous directory (like "back" button)
cd .. # Go up one directory level
cd ../.. # Go up two levels
⚠️ Common Mistake: Forgetting that cd without arguments takes you home. If you accidentally run cd and lose your place, use cd - to get back.
Advanced Navigation: The Directory Stack
For power users who jump between multiple locations:
pushd /var/log # Save current location, jump to /var/log
pushd ~/projects # Save /var/log, jump to ~/projects
dirs # View the stack
popd # Return to /var/log
popd # Return to original location
Why use this? When you're working across multiple directory trees (e.g., comparing logs in /var/log with configs in /etc while editing code in ~/projects), the directory stack is faster than repeatedly typing full paths.
Clearing the Clutter
clear
Clears your terminal screen without affecting your work. Useful when output has become overwhelming.
Keyboard shortcut: Ctrl+L does the same thing (faster than typing).
3. File Operations: Creating, Moving, and Deleting
Creating Files
touch filename.txt
Creates an empty file or updates the timestamp on an existing file. While touch seems simple, it's essential for:
- Creating placeholder files
- Resetting modification times
- Testing write permissions in a directory (see the probe sketch below)
Why the name "touch"? It "touches" the file, updating its access time without modifying contents.
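A quick write-permission probe (a sketch; the path is arbitrary):

```bash
# Can I write to /var/log? Try to create a file, then clean up.
if touch /var/log/.write_test 2>/dev/null; then
    echo "writable"
    rm /var/log/.write_test
else
    echo "not writable"
fi
```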
Creating Directories
mkdir new_folder
Make Directory creates a new folder. But the real power comes with options:
mkdir -p path/to/deeply/nested/folder
The -p (parents) flag creates all intermediate directories automatically. Without it, you'd need to create each level separately:
# Without -p (tedious):
mkdir path
mkdir path/to
mkdir path/to/deeply
mkdir path/to/deeply/nested
mkdir path/to/deeply/nested/folder
# With -p (elegant):
mkdir -p path/to/deeply/nested/folder
Best practice: Always use -p unless you specifically want an error when parent directories don't exist.
Copying Files and Directories
cp source.txt destination.txt
Copy creates a duplicate of a file:
# Copy and rename:
cp report.txt report_backup.txt
# Copy to another directory (keeping same name):
cp report.txt ~/Documents/
# Copy to another directory with new name:
cp report.txt ~/Documents/final_report.txt
For directories, use -r (recursive):
cp -r project/ project_backup/
Without -r, you'll get an error: cp: -r not specified; omitting directory 'project/'
Useful options:
- `-i`: Interactive; prompt before overwriting
- `-v`: Verbose; show what's being copied
- `-u`: Update; only copy if source is newer than destination
- `-a`: Archive mode; preserves permissions, timestamps, and structure (ideal for backups)
Pro tip: Combine flags for safety and visibility:
cp -riv source/ destination/
Moving and Renaming
mv old_name.txt new_name.txt
Move serves double duty:
Renaming (destination in same directory):
mv draft.txt final.txt
Moving (destination in different directory):
mv final.txt ~/Documents/
Moving and renaming simultaneously:
mv draft.txt ~/Documents/final.txt
Moving directories (no -r flag needed):
mv old_folder/ new_location/
⚠️ Warning: Unlike cp, mv doesn't have a built-in way to prevent overwriting. Use -i for safety:
mv -i source.txt destination.txt # Prompts if destination exists
Deleting Files
rm filename.txt
Remove permanently deletes files. There is no "Recycle Bin" or "Trash" on the command line; once removed, files are gone.
⚠️ CRITICAL WARNING: The most dangerous command in Linux is:
sudo rm -rf /
NEVER RUN THIS. It recursively (-r) and forcefully (-f) deletes everything on your system, including the operating system itself.
Safe deletion practices:
# Delete a single file:
rm old_file.txt
# Delete with confirmation:
rm -i file.txt # Prompts before deleting
# Delete multiple files:
rm file1.txt file2.txt file3.txt
# Delete directories (requires -r):
rm -r old_folder/
# Force deletion without prompts (use cautiously):
rm -rf temporary_folder/
Protecting yourself:
- Always double-check the path before using `-r`
- Use `ls` first to verify what you're about to delete
- Never use `-rf` together unless you're certain
- Consider aliasing `rm` to `rm -i` in your `.bashrc` for an automatic safety net (see the sketch below)
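A minimal sketch of that safety net:

```bash
# In ~/.bashrc: make rm ask before every deletion
alias rm='rm -i'

# When you really want the original behavior, bypass the alias:
command rm old_file.txt   # or: \rm old_file.txt
```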
Alternative for empty directories:
rmdir empty_folder/
This only works on empty directories, providing a safety check against accidental deletion.
Creating Links
ln -s /path/to/original /path/to/link
Links are like shortcuts or references. The -s creates a symbolic (soft) link, the most commonly used type.
Symbolic links point to a file path:
ln -s /var/www/html/index.php ~/index_link.php
Now you can edit ~/index_link.php and the changes affect the original file in /var/www/html/.
Real-world use cases:
- Creating shortcuts to deeply nested files
- Maintaining multiple versions (link to the current version; sketch below)
- Organizing files without duplicating them
- Cross-referencing configurations
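A sketch of the "current version" pattern (paths are hypothetical):

```bash
# Deploy releases side by side and point a stable name at the active one:
ln -s /opt/myapp-2.4.1 /opt/myapp-current

# Upgrading is just repointing the link (-f replaces the old link, -n treats
# an existing symlink-to-directory as a file instead of descending into it):
ln -sfn /opt/myapp-2.5.0 /opt/myapp-current
```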
Viewing links:
ls -l
# Output: lrwxrwxrwx ... index_link.php -> /var/www/html/index.php
# ^
# 'l' indicates it's a link
Hard links (without -s) create a direct reference to file data:
ln original.txt hardlink.txt
Hard links are less common because they have limitations (can't span filesystems, can't link directories).
Identifying File Types
file mysterious_file
Determines what type of file something is, regardless of its extension (or lack thereof):
$ file script
script: POSIX shell script, ASCII text executable
$ file image.jpg
image.jpg: JPEG image data, JFIF standard 1.01
$ file compiled_program
compiled_program: ELF 64-bit LSB executable, x86-64
Why this matters: Unix doesn't rely on file extensions like Windows does. A file named document could be a text file, an image, or a program. The file command examines the actual content to tell you what it is.
Verifying File Integrity
md5sum filename
sha256sum filename
These commands generate cryptographic "fingerprints" (checksums) of files:
$ sha256sum ubuntu-22.04.iso
b8f31413336b9393ad5d8ef0282717b2ab19f007df2e9ed5196c13d8f9153c8b ubuntu-22.04.iso
Use cases:
- Verify downloaded files haven't been corrupted or tampered with
- Check if two files are identical without comparing them byte-by-byte
- Detect changes in files (checksums change if even one bit changes)
Verification workflow:
# Download a file and its checksum:
wget https://example.com/file.zip
wget https://example.com/file.zip.sha256
# Verify:
sha256sum -c file.zip.sha256
# Output: file.zip: OK (means it matches)
4. Reading and Viewing Files
Quick Output: cat
cat filename.txt
Concatenate dumps the entire contents of a file to your screen instantly. Perfect for short files or when you need to pipe content to another command.
Multiple files:
cat file1.txt file2.txt file3.txt # Shows all files in sequence
Combining files:
cat part1.txt part2.txt > complete.txt
⚠️ Common Mistake: Using cat on large files. If you accidentally run cat on a gigabyte-sized log file, your terminal will freeze while it tries to display millions of lines. Use less instead.
Quick tips:
- `cat -n`: Number all lines
- `cat -A`: Show all special characters (tabs, line endings, etc.)
Interactive Viewing: less
less large_file.log
A powerful pager for viewing files of any size. Unlike cat, it doesn't load the entire file into memory, so you can view gigabyte-sized files instantly.
Essential controls:
- `Spacebar` or `PageDown`: Next page
- `b` or `PageUp`: Previous page
- `g`: Jump to beginning
- `G`: Jump to end
- `/search_term`: Search forward
- `?search_term`: Search backward
- `n`: Next search result
- `N`: Previous search result
- `q`: Quit
Why "less" is more:
The name is a play on an older program called more. The joke: "less is more than more," meaning less has more features than more.
Pro tips:
less +F file.log # Start in "follow" mode (like tail -f)
# Press Ctrl+C to stop following, then navigate normally
First and Last Lines
head filename.txt # First 10 lines
tail filename.txt # Last 10 lines
Custom line counts:
head -n 50 access.log # First 50 lines
tail -n 100 error.log # Last 100 lines
The killer feature: tail -f
tail -f /var/log/syslog
The -f (follow) flag watches a file in real-time, displaying new lines as they're added. This is indispensable for:
- Monitoring live log files
- Watching build processes
- Debugging applications in real-time
Stop following: Press Ctrl+C
Pro tip: Follow multiple files simultaneously:
tail -f /var/log/nginx/access.log /var/log/nginx/error.log
Reverse Text: rev
rev filename.txt
Reverses each line character-by-character:
Input: Hello World
Output: dlroW olleH
Practical use? Honestly, it's rarely used except for:
- Fun text manipulation
- Certain data processing tasks
- Reversing accidentally reversed text
The Universal Editor: vi/vim
vi filename.txt
Vi (and its improved version, Vim) is the most universally available text editor, present on virtually every Unix-like system. Even if it seems arcane at first, knowing vi basics is essential for system administration.
Bare minimum survival guide:
- Opening: `vi filename`
- Modes:
  - Normal mode (default): for navigation and commands
  - Insert mode: for typing text (press `i` to enter)
  - Command mode: for saving/quitting (press `:` to enter)
- Basic workflow:
  1. Press `i` to start inserting text
  2. Type your content
  3. Press `Esc` to return to Normal mode
  4. Type `:wq` and press `Enter` to write and quit
- Emergency exit: if you're stuck, press `Esc` several times, then type `:q!` and press `Enter`. `:q!` quits without saving (overriding any warnings).
Why learn vi?
- It's the only editor guaranteed to be present on remote servers
- It's powerful once you overcome the initial learning curve
- Many modern IDEs offer vim keybindings because they're efficient
Alternatives if vi isn't your thing:
- `nano`: Simpler, more intuitive for beginners
- `emacs`: Powerful but requires installation on some systems
5. Searching: Finding Files and Text
Searching Inside Files: grep
grep "search_term" filename.txt
Global Regular Expression Print is your text search workhorse. It scans files line-by-line and outputs matching lines.
Basic examples:
# Find error messages in a log:
grep "ERROR" application.log
# Case-insensitive search:
grep -i "warning" system.log # Matches WARNING, Warning, warning
# Show line numbers:
grep -n "TODO" script.sh
# Output: 42:# TODO: Fix this later
# Invert match (show lines that DON'T match):
grep -v "DEBUG" app.log # Hide debug messages
# Count matches:
grep -c "success" results.txt
# Output: 127
Recursive search through directories:
grep -r "config_value" /etc/
This searches through all files in /etc/ and its subdirectories, which is incredibly powerful for finding where a setting is defined.
Advanced options:
- `-A 3`: Show 3 lines After each match (context)
- `-B 3`: Show 3 lines Before each match
- `-C 3`: Show 3 lines of Context (both before and after)
- `-E`: Use extended regular expressions (more powerful patterns)
- `-w`: Match whole words only
- `-x`: Match whole lines only (exact)
Real-world power move:
grep -rn "import pandas" ~/projects/ --include="*.py"
Find all Python files in your projects that import pandas, showing line numbers.
⚠️ Common Pitfall: Forgetting that grep returns an exit code. This matters in scripts:
if grep -q "error" log.txt; then
echo "Errors found!"
fi
The -q (quiet) flag suppresses output; we only care about the exit code.
Searching for Files: find
find /starting/path -name "pattern"
While grep searches inside files, find searches for files themselves based on name, size, type, permissions, modification time, and more.
Search by name:
# Find all .log files:
find /var/log -name "*.log"
# Case-insensitive name search:
find /home -iname "*.JPG" # Matches .jpg, .JPG, .Jpg, etc.
Search by type:
find /etc -type f # Only files
find /tmp -type d # Only directories
find /dev -type l # Only symbolic links
Search by time:
# Modified in last 7 days:
find . -mtime -7
# Modified more than 30 days ago:
find . -mtime +30
# Modified exactly 5 days ago:
find . -mtime 5
# Accessed in last 24 hours:
find /var/log -atime -1
Search by size:
# Files larger than 100MB:
find /home -size +100M
# Files smaller than 1KB:
find . -size -1k
# Files between 10MB and 50MB:
find . -size +10M -size -50M
Combining criteria (AND logic is default):
# Large log files modified recently:
find /var/log -name "*.log" -size +10M -mtime -7
Executing commands on found files:
# Delete all .tmp files:
find /tmp -name "*.tmp" -delete
# Change permissions on all scripts:
find ~/scripts -name "*.sh" -exec chmod +x {} \;
# More efficient with xargs (see Section 6):
find . -name "*.txt" -print0 | xargs -0 wc -l
⚠️ Warning: find with -delete or -exec rm is powerful and dangerous. Always test without the destructive action first:
# Test first:
find /tmp -name "*.tmp"
# If output looks right:
find /tmp -name "*.tmp" -delete
Pro tip: excluding directories:
# Search but ignore node_modules:
find . -name "*.js" -not -path "*/node_modules/*"
Fast File Locating: locate
locate filename
Blazing fast filename search that works across your entire system. How? It searches a pre-built database instead of scanning the filesystem in real-time.
Advantages over find:
- Incredibly fast (sub-second searches across millions of files)
- Simple syntax
Disadvantages:
- Database may be outdated (usually updated daily)
- Only searches by filename (no size, time, or content filtering)
Updating the database:
sudo updatedb
Run this after creating or deleting many files if you need locate to find them immediately.
Case-insensitive search:
locate -i document.pdf
Limiting results:
locate -n 20 readme # Show only first 20 matches
When to use locate vs. find:
- Use `locate` when you vaguely remember a filename and need quick results
- Use `find` when you need precise criteria (size, date, type) or the database might be stale
Finding Commands: apropos
apropos "search term"
Searches through man page descriptions to find relevant commands:
$ apropos "copy files"
cp (1) - copy files and directories
cpio (1) - copy files to and from archives
rsync (1) - fast, versatile, remote file-copying tool
Use case: "I need to do X, but I don't know which command..." Just ask apropos.
Exact keyword match:
apropos -e networking
Comparing Files
Line-by-line comparison: diff
diff file1.txt file2.txt
Shows exactly what changed between two files:
3c3
< This is the old line
---
> This is the new line
7d6
< This line was deleted
Unified format (more readable):
diff -u file1.txt file2.txt
Side-by-side comparison:
diff -y file1.txt file2.txt
Comparing directories:
diff -r directory1/ directory2/
Practical use: Code reviews, configuration audits, troubleshooting changes.
Byte-by-byte comparison: cmp
cmp file1.bin file2.bin
Unlike diff (which compares text line-by-line), cmp compares files byte-by-byte. Essential for binary files like images, videos, or compiled programs.
Silent check (just the exit code):
cmp -s file1 file2 && echo "Files are identical"
Comparing sorted files: comm
comm file1.txt file2.txt
Requires both files to be sorted. Outputs three columns:
- Lines only in file1
- Lines only in file2
- Lines in both files
Suppress columns:
comm -12 file1.txt file2.txt # Show only lines in both (intersection)
comm -23 file1.txt file2.txt # Show only lines unique to file1
6. Advanced Text Processing: Power Tools
These commands transform raw text into structured information. They're the secret sauce behind command-line productivity.
Stream Editor: sed
sed 's/old/new/' filename.txt
Stream Editor performs find-and-replace and other transformations as text flows through it.
Basic substitution:
# Replace first occurrence per line:
sed 's/cat/dog/' pets.txt
# Replace all occurrences (g for global):
sed 's/cat/dog/g' pets.txt
# Replace and save to new file:
sed 's/cat/dog/g' pets.txt > updated_pets.txt
# Edit file in-place:
sed -i 's/cat/dog/g' pets.txt
⚠️ Warning: -i modifies the original file. Use -i.bak to create a backup:
sed -i.bak 's/cat/dog/g' pets.txt # Creates pets.txt.bak
Delete lines:
# Delete line 5:
sed '5d' file.txt
# Delete lines 10-20:
sed '10,20d' file.txt
# Delete lines matching a pattern:
sed '/^#/d' script.sh # Remove comment lines
sed '/^$/d' file.txt # Remove blank lines
Print specific lines:
# Print line 42:
sed -n '42p' large_file.txt
# Print lines 10-20:
sed -n '10,20p' file.txt
Multiple operations:
sed -e 's/cat/dog/g' -e 's/red/blue/g' file.txt
Real-world exampleโconfiguration file update:
# Change database host in config:
sed -i 's/DB_HOST=localhost/DB_HOST=db.example.com/g' config.env
Pattern Scanner: awk
awk '{print $1}' file.txt
AWK is a complete programming language designed for text processing. Its superpower: effortlessly handling column-based data.
Understanding AWKโs model:
- AWK processes text line-by-line
- Each line is split into fields (columns)
- `$1` is the first field, `$2` is the second, etc.
- `$0` is the entire line
Basic field extraction:
# Print first column:
ls -l | awk '{print $9}' # Filenames only
# Print multiple columns:
ls -l | awk '{print $9, $5}' # Filename and size
# Reorder columns:
echo "John Doe 30" | awk '{print $3, $1, $2}'
# Output: 30 John Doe
Custom field separators:
# Default separator is whitespace, but you can change it:
awk -F':' '{print $1}' /etc/passwd # Print all usernames
# Using comma as separator:
awk -F',' '{print $2}' data.csv
Conditional processing:
# Print lines where column 3 is greater than 100:
awk '$3 > 100' data.txt
# Print lines matching a pattern:
awk '/ERROR/ {print $1, $4}' log.txt
# Combine conditions:
awk '$3 > 100 && $5 == "active"' data.txt
Mathematical operations:
# Sum all numbers in column 2:
awk '{sum += $2} END {print sum}' numbers.txt
# Average:
awk '{sum += $1; count++} END {print sum/count}' data.txt
# Count lines:
awk 'END {print NR}' file.txt # NR = Number of Records (lines)
Real-world examples:
Analyze access logs:
# Count requests per IP:
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head
# Total bandwidth transferred (column 10 is bytes):
awk '{sum += $10} END {print sum/1024/1024 " MB"}' access.log
Parse CSV data:
# Extract email addresses from CSV:
awk -F',' '{print $3}' contacts.csv
# Filter high-value transactions:
awk -F',' '$4 > 1000 {print $1, $2, $4}' transactions.csv
Pro tip: AWK can replace many pipes:
# Instead of: cat file | grep pattern | awk '{print $2}'
# Just use:
awk '/pattern/ {print $2}' file
Simple Column Cutter: cut
cut -d',' -f1 data.csv
A simpler alternative to AWK for basic column extraction:
Extract specific fields:
# Field 1 (default delimiter is tab):
cut -f1 file.txt
# Fields 1 and 3:
cut -f1,3 file.txt
# Field range:
cut -f2-5 file.txt
# Custom delimiter:
cut -d':' -f1 /etc/passwd # Extract usernames
cut -d',' -f2,4 data.csv # Extract columns 2 and 4 from CSV
Character-based extraction:
# First 10 characters of each line:
cut -c1-10 file.txt
# Characters 5 through 15:
cut -c5-15 file.txt
# Everything from character 20 onward:
cut -c20- file.txt
When to use cut vs. awk:
- Use `cut` for simple, single-delimiter column extraction
- Use `awk` for complex conditions, calculations, or multiple delimiters
Sorting Lines: sort
sort filename.txt
Arranges lines alphabetically or numerically:
Basic sorting:
# Alphabetical (default):
sort names.txt
# Reverse order:
sort -r names.txt
# Numeric sort (critical for numbers):
sort -n numbers.txt
Why -n matters:
# Without -n (alphabetical):
echo -e "1\n10\n2\n20" | sort
# Output: 1, 10, 2, 20 (wrong!)
# With -n (numeric):
echo -e "1\n10\n2\n20" | sort -n
# Output: 1, 2, 10, 20 (correct!)
Sort by specific column:
# Sort by second column, numerically:
sort -k2 -n data.txt
# Sort by third column, reverse:
sort -k3 -r data.txt
# Multiple sort keys:
sort -k1,1 -k2n data.txt # Sort by column 1, then by column 2 numerically
Advanced options:
# Ignore leading blanks:
sort -b file.txt
# Case-insensitive:
sort -f names.txt
# Human-readable numbers (understands K, M, G):
du -h * | sort -h
# Random shuffle:
sort -R file.txt
# Unique sort (remove duplicates while sorting):
sort -u file.txt
Real-world example: find the largest directories:
du -sh * | sort -h | tail -10
Remove Duplicate Lines: uniq
uniq file.txt
Removes adjacent duplicate lines; this is crucial to understand.
⚠️ Critical Pitfall: uniq only removes duplicates that are next to each other:
# This WON'T work as expected:
echo -e "apple\nbanana\napple" | uniq
# Output: apple, banana, apple (duplicate remains!)
# This WILL work:
echo -e "apple\nbanana\napple" | sort | uniq
# Output: apple, banana
Best practice: Always pipe through sort first:
sort file.txt | uniq
Count occurrences:
sort file.txt | uniq -c
# Output:
# 3 apple
# 1 banana
# 2 cherry
Show only duplicates:
sort file.txt | uniq -d
Show only unique lines (no duplicates):
sort file.txt | uniq -u
Real-world examples:
Count unique visitors in access log:
awk '{print $1}' access.log | sort | uniq | wc -l
Find most common error messages:
grep ERROR app.log | sort | uniq -c | sort -nr | head -10
Character Translation: tr
tr 'abc' 'xyz'
Translates or deletes characters; works on standard input only:
Character substitution:
# Convert lowercase to uppercase:
echo "hello world" | tr 'a-z' 'A-Z'
# Output: HELLO WORLD
# Convert uppercase to lowercase:
echo "HELLO WORLD" | tr 'A-Z' 'a-z'
# Output: hello world
# ROT13 encoding:
echo "Hello" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
Delete characters:
# Remove all digits:
echo "Phone: 555-1234" | tr -d '0-9'
# Output: Phone: -
# Remove all spaces:
echo "too many spaces" | tr -d ' '
# Output: toomanyspaces
# Remove newlines:
cat multiline.txt | tr -d '\n'
Squeeze repeated characters:
# Collapse multiple spaces to single space:
echo "too many spaces" | tr -s ' '
# Output: too many spaces
# Remove duplicate letters:
echo "bookkeeper" | tr -s 'a-z'
# Output: bokeper
Complement (invert the set):
# Keep only alphanumeric characters:
echo "Hello, World! 123" | tr -cd 'A-Za-z0-9'
# Output: HelloWorld123
# Remove everything except newlines (one word per line):
cat file.txt | tr -cs 'A-Za-z' '\n'
Real-world uses:
Convert DOS line endings to Unix:
tr -d '\r' < dos_file.txt > unix_file.txt
Generate random passwords:
tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 20
Word, Line, and Byte Counting: wc
wc filename.txt
Word Count provides statistics about text:
Default output:
$ wc document.txt
45 312 2048 document.txt
│    │    │   └─ filename
│    │    └───── bytes
│    └────────── words
└─────────────── lines
Specific counts:
wc -l file.txt # Lines only (most common)
wc -w file.txt # Words only
wc -c file.txt # Bytes only
wc -m file.txt # Characters (may differ from bytes with Unicode)
wc -L file.txt # Length of longest line
Multiple files:
$ wc -l *.txt
100 file1.txt
200 file2.txt
150 file3.txt
450 total
Real-world examples:
Count files in directory:
ls | wc -l
Count lines of code in project:
find . -name "*.py" -exec cat {} \; | wc -l
Monitor log growth rate:
# Before:
wc -l app.log
# ... wait some time ...
# After:
wc -l app.log # Compare the numbers
Count occurrences of a pattern:
grep -r "TODO" src/ | wc -l
Pipe Splitter: tee
command | tee output.txt
Splits a pipeline: sends output to both a file and the screen (or next command).
Basic usage:
# See output AND save it:
ls -la | tee file_list.txt
# Long-running command: monitor and save:
./build_script.sh | tee build.log
Append instead of overwrite:
echo "New entry" | tee -a log.txt
Multiple outputs:
echo "Important" | tee file1.txt file2.txt file3.txt
Combining with sudo:
# This WON'T work (sudo doesn't apply to redirection):
sudo echo "nameserver 8.8.8.8" > /etc/resolv.conf
# This WILL work:
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
# Append with sudo:
echo "option timeout:1" | sudo tee -a /etc/resolv.conf
Real-world pattern: save and continue processing:
# Save intermediate results while continuing pipeline:
cat data.txt | tee raw_data.txt | grep "ERROR" | tee errors.txt | wc -l
Pro tip: silent output:
# Save to file without screen output:
command | tee file.txt > /dev/null
Argument Builder: xargs
command1 | xargs command2
Converts input into arguments for another command. This solves a fundamental problem: many commands don't read from standard input; they need arguments.
The problem xargs solves:
# This doesn't work (rm doesn't read filenames from stdin):
find . -name "*.tmp" | rm
# This works:
find . -name "*.tmp" | xargs rm
Basic usage:
# Delete files returned by find:
find . -name "*.log" | xargs rm
# Create directories:
echo "dir1 dir2 dir3" | xargs mkdir
# Download multiple URLs:
cat urls.txt | xargs wget
Handling spaces and special characters:
# UNSAFE (breaks with spaces in filenames):
find . -name "*.txt" | xargs rm
# SAFE (use null delimiter):
find . -name "*.txt" -print0 | xargs -0 rm
The -print0 and -0 combination uses null bytes (\0) as delimiters instead of spaces, making it safe for filenames with spaces, quotes, or other special characters.
Control execution:
# Run command once per item (-n 1):
echo "file1 file2 file3" | xargs -n 1 echo "Processing:"
# Output:
# Processing: file1
# Processing: file2
# Processing: file3
# Parallel execution (-P):
find . -name "*.jpg" | xargs -P 4 -I {} convert {} {}.optimized.jpg
# Processes 4 images simultaneously
Interactive prompting:
# Confirm before each execution:
find . -name "*.tmp" | xargs -p rm
# Prompts: rm ./file1.tmp?...
Replace string:
# Use {} as placeholder:
find . -name "*.txt" | xargs -I {} cp {} {}.backup
# Custom placeholder:
cat hostnames.txt | xargs -I HOST ssh HOST "df -h"
Real-world examples:
Batch rename files:
ls *.jpeg | xargs -I {} bash -c 'mv {} $(echo {} | sed s/jpeg/jpg/)'
Check which servers are up:
cat servers.txt | xargs -I {} -P 10 ping -c 1 {}
Find and replace across multiple files:
grep -l "old_term" *.txt | xargs sed -i 's/old_term/new_term/g'
Compress large files in parallel:
find . -name "*.log" -size +100M -print0 | xargs -0 -P 4 gzip
7. Users, Permissions, and Access Control
Linux is a multi-user system with robust permission controls. Understanding these concepts is essential for both security and day-to-day operations.
Identifying Yourself
whoami
Shows your current username:
$ whoami
akib
When it matters: After using su to switch users, or in scripts where you need to check who's running the code.
Detailed User Information
id
Displays your user ID (UID), group ID (GID), and all group memberships:
$ id
uid=1000(akib) gid=1000(akib) groups=1000(akib),27(sudo),998(docker)
What this tells you:
- `uid=1000(akib)`: Your user ID is 1000, username is "akib"
- `gid=1000(akib)`: Your primary group ID is 1000, group name is "akib"
- `groups=...`: You're also in the "sudo" and "docker" groups
Why it matters: Group membership determines what you can access. Being in the "sudo" group means you can run admin commands. Being in the "docker" group means you can run Docker containers without sudo.
Check another user:
id username
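In scripts, `id -nG` (print group names) is handy for membership checks; a minimal sketch:

```bash
# Does the current user belong to the docker group?
if id -nG "$USER" | grep -qw docker; then
    echo "docker group member: no sudo needed for docker commands"
fi
```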
List Group Memberships
groups
Simpler than id; it just lists group names:
$ groups
akib sudo docker www-data
Check another user's groups:
groups username
Execute as Administrator: sudo
sudo command
Superuser Do lets you run individual commands with root privileges:
# Install software:
sudo apt install nginx
# Edit system files:
sudo nano /etc/hosts
# Restart services:
sudo systemctl restart apache2
# View protected files:
sudo cat /var/log/auth.log
How it works:
1. You enter your password (not root's password)
2. The system checks if you're in the `sudo` group
3. The command runs with root privileges
4. Your password is cached for ~15 minutes
Running multiple commands:
# Start a root shell:
sudo -i # Login shell (loads root's environment)
sudo -s # Shell (preserves your environment)
# Run specific shell as root:
sudo bash
Run as different user:
sudo -u username command
Preserve environment variables:
sudo -E command # Keeps your environment
Best practices:
- Only use `sudo` when necessary
- Never run untrusted scripts with `sudo`
- Review what a command does before adding `sudo`
- Use `sudo -i` for multiple admin tasks, then `exit` when done
⚠️ Security Warning: The phrase "with great power comes great responsibility" was practically invented for sudo. One mistyped command can destroy your system.
Switch Users: su
su username
Substitute User switches your entire session to another account:
# Become root:
su
# or
su root
# Become another user:
su - john # The dash loads john's environment
Difference from sudo:
- `su` requires the target user's password; `sudo` requires your password
- `su` switches your entire session; `sudo` runs one command
Why sudo is preferred:
- More auditable (logs show who did what)
- More granular (can limit what commands users can run)
- Doesn't require sharing the root password
- Automatically times out
Return to original user:
exit
Understanding File Permissions
Every file and directory has permissions that control who can read, write, or execute it.
Viewing permissions:
$ ls -l script.sh
-rwxr-xr-- 1 akib developers 2048 Oct 24 10:30 script.sh
│└┬┘└┬┘└┬┘
│ │  │  └─ other users: r-- (read only)
│ │  └──── group: r-x (read and execute)
│ └─────── owner: rwx (read, write, execute)
└────────── file type: - (regular file), d (directory), l (link)
(A trailing + after the permission bits indicates extended ACLs.)
Permission breakdown:
- r (read): View file contents / List directory contents
- w (write): Modify file / Create/delete files in directory
- x (execute): Run file as program / Enter directory
Three permission sets:
- Owner (user who created the file)
- Group (users in the file's group)
- Others (everyone else)
Changing Permissions: chmod
chmod permissions file
Symbolic method (human-readable):
# Add execute permission for owner:
chmod u+x script.sh
# Remove write permission for others:
chmod o-w document.txt
# Add read permission for group:
chmod g+r data.txt
# Set exact permissions:
chmod u=rwx,g=rx,o=r file.txt
# Multiple changes:
chmod u+x,g+x,o-w script.sh
Symbols:
- `u` = user (owner)
- `g` = group
- `o` = others
- `a` = all (user, group, and others)
Operators:
- `+` = add permission
- `-` = remove permission
- `=` = set exact permission
Octal method (numeric):
Each permission set is represented by a three-digit octal number:
r = 4
w = 2
x = 1
Add them up:
- `7` (4+2+1) = `rwx`
- `6` (4+2) = `rw-`
- `5` (4+1) = `r-x`
- `4` = `r--`
- `0` = `---`
Common patterns:
# rwxr-xr-x (755): Owner full, others read/execute
chmod 755 script.sh
# rw-r--r-- (644): Owner read/write, others read-only
chmod 644 document.txt
# rwx------ (700): Only owner can access
chmod 700 private_script.sh
# rw-rw-r-- (664): Owner and group can edit, others read
chmod 664 shared_doc.txt
Recursive (apply to all files in directory):
chmod -R 755 /var/www/html/
Real-world examples:
Make script executable:
chmod +x deploy.sh
./deploy.sh # Now you can run it
Secure SSH keys:
chmod 600 ~/.ssh/id_rsa # Private keys must be owner-only
chmod 644 ~/.ssh/id_rsa.pub # Public keys can be readable
Fix web server permissions:
# Directories: 755 (browsable)
find /var/www -type d -exec chmod 755 {} \;
# Files: 644 (readable)
find /var/www -type f -exec chmod 644 {} \;
Changing Ownership: chown
chown owner:group file
Changes who owns a file:
# Change owner only:
sudo chown john file.txt
# Change owner and group:
sudo chown john:developers file.txt
# Change group only:
sudo chown :developers file.txt
# or use chgrp:
sudo chgrp developers file.txt
# Recursive:
sudo chown -R www-data:www-data /var/www/html/
Why you need sudo: Only root can change file ownership (security feature).
Real-world use case: After extracting files as root, change ownership to regular user:
sudo tar -xzf archive.tar.gz
sudo chown -R $USER:$USER extracted_folder/
Fix web application permissions:
# Web server needs to own web files:
sudo chown -R www-data:www-data /var/www/myapp/
# But you need to edit them:
sudo usermod -aG www-data $USER # Add yourself to www-data group
Changing Your Password
passwd
Prompts you to change your password:
$ passwd
Changing password for akib.
Current password:
New password:
Retype new password:
passwd: password updated successfully
Change another userโs password (as root):
sudo passwd username
Password requirements:
- Usually minimum 8 characters
- Mix of letters, numbers, symbols
- Not based on dictionary words
- Different from previous passwords
Best practices:
- Use a password manager
- Use strong, unique passwords for each system
- Enable two-factor authentication when available
- Change passwords periodically, especially after security incidents
8. Process and System Management
Understanding and controlling what your system is doing.
Viewing Processes: ps
ps
Process Status shows currently running processes:
Basic output:
$ ps
PID TTY TIME CMD
1234 pts/0 00:00:00 bash
5678 pts/0 00:00:00 ps
Show all processes:
ps aux # BSD style (no dash)
ps -ef # Unix style (with dash)
Both show similar information; choose whichever you prefer.
Understanding ps aux output:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
akib 1234 0.5 2.1 123456 12345 pts/0 S 10:30 0:05 python app.py
│    │    │    │    │      │     │     │  │     │    └─ command
│    │    │    │    │      │     │     │  │     └────── CPU time used
│    │    │    │    │      │     │     │  └──────────── start time
│    │    │    │    │      │     │     └─────────────── state
│    │    │    │    │      │     └───────────────────── terminal
│    │    │    │    │      └─────────────────────────── resident memory (KB)
│    │    │    │    └────────────────────────────────── virtual memory (KB)
│    │    │    └─────────────────────────────────────── % of RAM
│    │    └──────────────────────────────────────────── % of CPU
│    └───────────────────────────────────────────────── process ID
└────────────────────────────────────────────────────── user
Process states:
- `R`: Running
- `S`: Sleeping (waiting for an event)
- `D`: Uninterruptible sleep (usually I/O)
- `Z`: Zombie (finished but not cleaned up)
- `T`: Stopped (paused)
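You can ask ps to print the state column directly:

```bash
# STAT column shows each process's state (R, S, D, Z, T):
ps -eo stat,pid,user,cmd | head
```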
Find specific processes:
ps aux | grep python
ps aux | grep -i apache
Show process tree (parent-child relationships):
ps auxf # Forest view
pstree # Dedicated tree view
Sort by CPU usage:
ps aux --sort=-%cpu | head
Sort by memory usage:
ps aux --sort=-%mem | head
Real-Time Process Monitoring: top and htop
top
Interactive, real-time view of system processes:
Essential top commands:
- `q`: Quit
- `k`: Kill a process (prompts for PID)
- `M`: Sort by memory usage
- `P`: Sort by CPU usage
- `1`: Show individual CPU cores
- `h`: Help
- `u`: Filter by username
- `Spacebar`: Refresh immediately
Understanding the top display:
top - 14:32:01 up 5 days, 2:17, 3 users, load average: 0.45, 0.62, 0.58
Tasks: 187 total, 1 running, 186 sleeping, 0 stopped, 0 zombie
%Cpu(s): 12.3 us, 3.1 sy, 0.0 ni, 84.1 id, 0.5 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem: 15842.5 total, 2341.2 free, 8234.7 used, 5266.6 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 6892.4 avail Mem
Load average explained:
- Three numbers: 1-minute, 5-minute, 15-minute averages
- Represents number of processes waiting for CPU time
- On a 4-core system, load of 4.0 means fully utilized
- Load > number of cores = system is overloaded
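A quick sketch that compares the 1-minute load to your core count:

```bash
# /proc/loadavg holds the three load averages; nproc reports available cores.
cores=$(nproc)
load1=$(awk '{print $1}' /proc/loadavg)
echo "1-min load: ${load1} on ${cores} cores"
```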
Better alternative: htop
htop
A more user-friendly version with:
- Color-coded display
- Mouse support
- Easier process killing
- Tree view by default
- Better visual representation of CPU and memory
Install htop:
sudo apt install htop # Debian/Ubuntu
sudo yum install htop # Red Hat/CentOS
⚠️ Common Mistake: Panicking when you see high CPU usage in top. Check if it's legitimate activity before killing processes.
Terminating Processes: kill
kill PID
Sends signals to processes, usually to terminate them:
Basic usage:
# Graceful termination (SIGTERM):
kill 1234
# Force kill (SIGKILL):
kill -9 1234
Signal types:
- `SIGTERM` (15, default): "Please terminate gracefully"
  - Allows the process to clean up (save files, close connections)
  - Can be ignored by the process
- `SIGKILL` (9): "Die immediately"
  - Cannot be ignored or caught
  - No cleanup; data loss possible
  - Use as a last resort
Other useful signals:
kill -HUP 1234 # Hang up (often makes daemons reload config)
kill -STOP 1234 # Pause process
kill -CONT 1234 # Resume paused process
Kill by name:
killall process_name # Kill all processes with this name
pkill pattern # Kill processes matching pattern
Examples:
# Kill all Python processes:
killall python3
# Kill all processes owned by user:
pkill -u username
# Kill frozen Firefox:
killall -9 firefox
Finding the PID:
# Method 1:
ps aux | grep program_name
# Method 2:
pgrep program_name
# Method 3:
pidof program_name
⚠️ Warning: Always try regular kill before kill -9. Forcing termination can lead to:
- Lost unsaved work
- Corrupted files
- Orphaned processes
- Resource leaks
Job Control: bg, fg, jobs
When you start a program from the terminal, it's a "foreground job" that takes over your prompt. Job control lets you manage multiple programs.
Suspend current job:
Press Ctrl+Z to pause the foreground job:
$ python long_script.py
^Z
[1]+ Stopped python long_script.py
List jobs:
$ jobs
[1]+ Stopped python long_script.py
[2]- Running npm start &
Resume in foreground:
fg %1 # Resume job 1 in foreground
Resume in background:
bg %1 # Job 1 continues running, but you get your prompt back
Start job in background immediately:
long_running_command & # Ampersand runs it in background
Real-world workflow:
# Start editing a file:
vim document.txt
# Realize you need to check something:
# Press Ctrl+Z to suspend vim
# Run other commands:
ls -la
cat other_file.txt
# Go back to editing:
fg
# Or start a long task while editing:
bg # Continue vim in background (if it supports it)
⚠️ Limitation: Background jobs still output to the terminal. For true detachment, use nohup or terminal multiplexers.
Run After Logout: nohup
nohup command &
No Hang Up makes a process immune to logout, which is essential for long-running tasks on remote servers:
# Start a long backup:
nohup ./backup_script.sh &
# Start a development server:
nohup npm start &
# Output goes to nohup.out by default:
tail -f nohup.out
Redirect output:
nohup ./script.sh > output.log 2>&1 &
Explanation:
- `nohup`: Ignore hangup signals
- `> output.log`: Redirect stdout
- `2>&1`: Redirect stderr to the same place as stdout
- `&`: Run in background
Check if it's running:
ps aux | grep script.sh
Better alternative for remote work: Use tmux or screen (see Advanced Techniques section).
Disk Space: df
df -h
Disk Free shows available disk space per filesystem:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 35G 13G 74% /
/dev/sdb1 500G 350G 125G 74% /home
tmpfs 7.8G 1.2M 7.8G 1% /dev/shm
What it shows:
- `Filesystem`: Device or partition
- `Size`: Total capacity
- `Used`: Space consumed
- `Avail`: Space remaining
- `Use%`: Percentage full
- `Mounted on`: Where it's accessible in the directory tree
⚠️ Warning: When a disk hits 100%, things break:
- Can't save files
- Logs can't write (applications fail)
- System becomes unstable
Quick checks:
df -h / # Check root partition
df -h /home # Check home partition
df -h --total # Show grand total
Find largest filesystems:
df -h | sort -h -k3 # Sort by usage
Directory Sizes: du
du -sh directory/
Disk Usage shows how much space files and directories consume:
# Summary of directory:
du -sh ~/Downloads/
# Output: 2.3G /home/akib/Downloads/
# Summarize each subdirectory:
du -sh ~/Documents/*
# Output:
# 150M /home/akib/Documents/Work
# 3.2G /home/akib/Documents/Projects
# 45M /home/akib/Documents/Personal
# Show all files and directories (recursive):
du -h ~/Projects/
Options:
- `-s`: Summary (don't show subdirectories)
- `-h`: Human-readable sizes
- `-c`: Show grand total
- `--max-depth=N`: Limit recursion depth
Find disk hogs:
# Top 10 largest directories:
du -sh /* | sort -h | tail -10
# Or more accurate:
du -h --max-depth=1 / | sort -h | tail -10
Find large files:
find / -type f -size +100M -exec du -h {} \; | sort -h
Real-world troubleshooting:
# "Disk full" alertโfind the culprit:
du -sh /* | sort -h | tail -5
# Drill down into the largest directory:
du -sh /var/* | sort -h | tail -5
# Continue until you find the problem:
du -sh /var/log/* | sort -h | tail -5
System Information: uname
uname -a
Shows kernel and system information:
$ uname -a
Linux myserver 5.15.0-56-generic #62-Ubuntu SMP Thu Nov 24 13:31:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Individual components:
uname -s # Kernel name: Linux
uname -n # Network name: myserver
uname -r # Kernel release: 5.15.0-56-generic
uname -v # Kernel version: #62-Ubuntu SMP Thu Nov 24...
uname -m # Machine hardware: x86_64
uname -o # Operating system: GNU/Linux
Practical use:
# Check if you're on 64-bit:
uname -m
# x86_64 = 64-bit, i686 = 32-bit
# Get kernel version for bug reports:
uname -r
Hostname
hostname
Shows or sets the system's network name:
$ hostname
myserver.example.com
# Show just the short name:
$ hostname -s
myserver
# Show IP addresses:
$ hostname -I
192.168.1.100 10.0.0.50
Change hostname (temporary):
sudo hostname newname
Change hostname (permanent):
# Ubuntu/Debian:
sudo hostnamectl set-hostname newname
# Older systems:
sudo nano /etc/hostname # Edit file
sudo nano /etc/hosts # Update 127.0.1.1 entry
System Shutdown and Reboot
reboot
shutdown
Control system power state (requires sudo):
Reboot immediately:
sudo reboot
Shutdown immediately:
sudo shutdown -h now
Shutdown with delay:
sudo shutdown -h +10 # Shutdown in 10 minutes
sudo shutdown -h 23:00 # Shutdown at 11 PM
Reboot with delay:
sudo shutdown -r +5 # Reboot in 5 minutes
Cancel scheduled shutdown:
sudo shutdown -c
Broadcast message to users:
sudo shutdown -h +10 "System maintenance in 10 minutes"
Alternative commands:
sudo poweroff # Immediate shutdown
sudo halt # Stop the system (older method)
sudo init 0 # Shutdown (runlevel 0)
sudo init 6 # Reboot (runlevel 6)
9. Networking Essentials
Testing Connectivity: ping
ping hostname
Checks if you can reach a remote host:
$ ping google.com
PING google.com (142.250.185.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.185.46): icmp_seq=1 ttl=117 time=12.3 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.185.46): icmp_seq=2 ttl=117 time=11.8 ms
Understanding output:
- `64 bytes`: Packet size
- `icmp_seq`: Packet sequence number
- `ttl`: Time To Live (hops remaining)
- `time`: Round-trip latency in milliseconds
Stop pinging:
Press Ctrl+C to stop. You'll see statistics:
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 11.532/12.015/12.847/0.518 ms
Useful options:
# Send specific number of pings:
ping -c 4 google.com
# Set interval (1 second default):
ping -i 0.5 example.com # Ping every 0.5 seconds
# Flood ping (requires root):
sudo ping -f 192.168.1.1 # As fast as possible (testing)
# Set packet size:
ping -s 1000 example.com # 1000-byte packets
Troubleshooting scenarios:
No response:
$ ping 192.168.1.50
PING 192.168.1.50 (192.168.1.50) 56(84) bytes of data.
^C
--- 192.168.1.50 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss
Causes: Host down, network unreachable, firewall blocking ICMP
High latency:
time=523 ms # Should be <50ms for LAN, <100ms for internet
Causes: Network congestion, bad connection, routing issues
Packet loss:
10 packets transmitted, 7 received, 30% packet loss
Causes: Weak WiFi, network congestion, failing hardware
Remote Access: ssh
ssh user@hostname
Secure Shell connects you to remote Linux systems securely:
Basic connection:
ssh akib@192.168.1.100
ssh admin@server.example.com
Custom port:
ssh -p 2222 user@hostname
Execute single command:
ssh user@server "df -h"
ssh user@server "systemctl status nginx"
X11 forwarding (run GUI apps remotely):
ssh -X user@server
# Then run GUI programs; they display on your local screen
Verbose output (troubleshooting):
ssh -v user@server # Verbose
ssh -vvv user@server # Very verbose
SSH config file (~/.ssh/config):
Make connections easier:
Host myserver
HostName server.example.com
User akib
Port 22
IdentityFile ~/.ssh/id_rsa
Host prod
HostName 203.0.113.50
User admin
Port 2222
Now just type:
ssh myserver
ssh prod
Key-based authentication (covered in Advanced Techniques; a quick sketch follows the list):
- More secure than passwords
- No password typing required
- Essential for automation
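A minimal key-setup sketch (user@server is a placeholder):

```bash
# Generate a modern key pair (files land in ~/.ssh/):
ssh-keygen -t ed25519 -C "akib@laptop"

# Copy the public key to the server's ~/.ssh/authorized_keys:
ssh-copy-id user@server

# Subsequent logins use the key instead of a password:
ssh user@server
```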
⚠️ Security Best Practices:
- Never use root account directly (use sudo instead)
- Disable password authentication (use keys only)
- Use non-standard ports
- Enable fail2ban to block brute-force attacks
- Keep SSH updated
File Synchronization: rsync
rsync source destination
Remote Sync is the Swiss Army knife of file copying: efficient, powerful, and network-aware:
Basic local copy:
rsync -av source/ destination/
Essential options:
- `-a`: Archive mode (preserves permissions, timestamps, symbolic links)
- `-v`: Verbose (show files being transferred)
- `-z`: Compress during transfer
- `-h`: Human-readable sizes
- `-P`: Show progress and keep partial files
Best practice combination:
rsync -avzP source/ destination/
Remote copying:
# Upload to remote server:
rsync -avz /local/path/ user@server:/remote/path/
# Download from remote server:
rsync -avz user@server:/remote/path/ /local/path/
Important trailing slash behavior:
# With trailing slash: copy CONTENTS:
rsync -av source/ destination/
# Result: destination contains files from source
# Without trailing slash: copy DIRECTORY:
rsync -av source destination/
# Result: destination/source/ contains the files
Delete files in destination not in source:
rsync -av --delete source/ destination/
Dry run (preview what would happen):
rsync -avn --delete source/ destination/
# -n = dry run (no changes made)
Exclude files:
# Exclude pattern:
rsync -av --exclude '*.tmp' source/ dest/
# Multiple excludes:
rsync -av --exclude '*.log' --exclude 'node_modules/' source/ dest/
# Exclude file list:
rsync -av --exclude-from='exclude-list.txt' source/ dest/
Resume interrupted transfers:
rsync -avP source/ dest/ # -P enables partial file resumption
Real-world examples:
Backup entire home directory:
rsync -avzP --delete ~/ /mnt/backup/home/
Mirror website to remote server:
rsync -avz --delete /var/www/html/ user@webserver:/var/www/html/
Sync with bandwidth limit:
rsync -avz --bwlimit=1000 large-files/ user@server:/path/
# Limit to 1000 KB/s
Why rsync beats scp:
- Only transfers changed parts of files (delta transfer)
- Can resume interrupted transfers
- More options for filtering and control
- Better for large transfers or slow connections
Network Information: ip
ip addr show
Modern tool for viewing and configuring network interfaces (replaces older ifconfig):
Show all network interfaces:
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
Abbreviated versions:
ip a # Short for 'ip addr show'
ip addr # Same thing
ip link show # Show link-layer information
ip link # Abbreviated
Show specific interface:
ip addr show eth0
ip addr show wlan0
Show routing table:
ip route show
# or
ip r
Show statistics:
ip -s link # Interface statistics (packets, errors)
Common tasks:
Add IP address (temporary):
sudo ip addr add 192.168.1.50/24 dev eth0
Remove IP address:
sudo ip addr del 192.168.1.50/24 dev eth0
Bring interface up/down:
sudo ip link set eth0 up
sudo ip link set eth0 down
⚠️ Note: Changes with ip are temporary; they're lost on reboot. Permanent changes require editing network configuration files (location varies by distribution).
Downloading Files: wget and curl
Both download files from the web, but with different philosophies:
wget: The Downloader
wget URL
Designed specifically for downloading files:
Basic download:
wget https://example.com/file.zip
Save with custom name:
wget -O custom_name.zip https://example.com/file.zip
Resume interrupted download:
wget -c https://example.com/large_file.iso
Download multiple files:
wget -i urls.txt # File containing list of URLs
Background download:
wget -b https://example.com/file.zip
tail -f wget-log # Monitor progress
Recursive download (mirror site):
wget -r -np -k https://example.com/docs/
# -r = recursive
# -np = no parent (don't go up in directory structure)
# -k = convert links for local viewing
Limit download speed:
wget --limit-rate=200k https://example.com/file.zip
Authentication:
wget --user=username --password=pass https://example.com/file.zip
curl: The Swiss Army Knife
curl URL
More versatile; it can handle uploads, APIs, and complex protocols:
Basic download (outputs to stdout):
curl https://example.com/file.txt
Save to file:
curl -o filename.txt https://example.com/file.txt
# or preserve remote filename:
curl -O https://example.com/file.txt
Follow redirects:
curl -L https://example.com/redirect
Show progress:
curl -# -O https://example.com/file.zip # Progress bar
API requests:
# GET request:
curl https://api.example.com/users
# POST request with data:
curl -X POST -d "name=John&email=john@example.com" https://api.example.com/users
# JSON POST:
curl -X POST -H "Content-Type: application/json" \
-d '{"name":"John","email":"john@example.com"}' \
https://api.example.com/users
# With authentication:
curl -u username:password https://api.example.com/data
Headers:
# Show response headers:
curl -i https://example.com
# Show only headers:
curl -I https://example.com
# Custom headers:
curl -H "Authorization: Bearer TOKEN" https://api.example.com/data
Upload files:
curl -F "file=@document.pdf" https://example.com/upload
When to use which:
- wget: Downloading files, mirroring websites, resume capability
- curl: API testing, complex requests, headers, uploads
10. Archives and Compression
The Tape Archive: tar
tar options archive.tar files
Originally designed for Tape Archives, tar bundles multiple files into a single file (without compression):
Essential operations:
Create archive:
tar -cvf archive.tar file1 file2 directory/
# -c = create
# -v = verbose
# -f = filename
Extract archive:
tar -xvf archive.tar
# -x = extract
List contents:
tar -tvf archive.tar
# -t = list
Compressed archives:
Most tar archives are also compressed. The flag indicates compression type:
Gzip (.tar.gz or .tgz):
# Create:
tar -czvf archive.tar.gz directory/
# -z = gzip compression
# Extract:
tar -xzvf archive.tar.gz
# Extract to specific directory:
tar -xzvf archive.tar.gz -C /target/directory/
Bzip2 (.tar.bz2):
# Create (better compression, slower):
tar -cjvf archive.tar.bz2 directory/
# -j = bzip2 compression
# Extract:
tar -xjvf archive.tar.bz2
XZ (.tar.xz):
# Create (best compression, slowest):
tar -cJvf archive.tar.xz directory/
# -J = xz compression
# Extract:
tar -xJvf archive.tar.xz
Advanced options:
Exclude files:
tar -czvf backup.tar.gz --exclude='*.tmp' --exclude='node_modules' ~/project/
Extract specific files:
tar -xzvf archive.tar.gz path/to/specific/file
Preserve permissions:
tar -cpzvf archive.tar.gz directory/
# -p = preserve permissions
Append to existing archive:
tar -rvf archive.tar newfile.txt
# -r = append
Update archive (only newer files):
tar -uvf archive.tar directory/
# -u = update
Mnemonic for remembering flags:
- Create: Create Zipped File → `-czf`
- Extract: eXtract Zipped File → `-xzf`
- List: Table of Verbose Files → `-tvf`
Real-world examples:
Backup home directory:
tar -czvf home-backup-$(date +%Y%m%d).tar.gz ~/
Backup with progress indicator:
tar -czvf backup.tar.gz directory/ --checkpoint=1000 --checkpoint-action=dot
Remote backup over SSH:
tar -czvf - directory/ | ssh user@server "cat > backup.tar.gz"
Extract while preserving everything:
sudo tar -xzvpf backup.tar.gz -C /
# -p = preserve permissions
# -C / = extract to root
Compression Tools
gzip/gunzip
gzip file.txt # Compresses to file.txt.gz (deletes original)
gunzip file.txt.gz # Decompresses (deletes .gz)
Keep original:
gzip -k file.txt
gunzip -k file.txt.gz
Compression levels:
gzip -1 file.txt # Fastest, least compression
gzip -9 file.txt # Slowest, best compression
View compressed file without extracting:
zcat file.txt.gz # View contents
zless file.txt.gz # View with pager
zgrep pattern file.txt.gz # Search compressed file
bzip2/bunzip2
bzip2 file.txt # Better compression than gzip
bunzip2 file.txt.bz2
Similar options to gzip (-k to keep, -1 to -9 for levels).
View compressed:
bzcat file.txt.bz2
bzless file.txt.bz2
zip/unzip
zip archive.zip file1 file2 directory/
unzip archive.zip
ZIP format (compatible with Windows):
Create archive:
# Files:
zip archive.zip file1.txt file2.txt
# Directories (recursive):
zip -r archive.zip directory/
# With compression level:
zip -9 -r archive.zip directory/ # Maximum compression
Extract archive:
# Current directory:
unzip archive.zip
# Specific directory:
unzip archive.zip -d /target/directory/
# List contents without extracting:
unzip -l archive.zip
# Extract specific file:
unzip archive.zip path/to/file.txt
Update existing archive:
zip -u archive.zip newfile.txt
Delete from archive:
zip -d archive.zip file-to-remove.txt
Password protection:
zip -e -r secure.zip directory/ # Prompts for password
unzip secure.zip # Prompts for password
11. Bash Scripting: Automating Tasks
Bash isn't just an interactive shell; it's a complete programming language for automation.
Script Basics
Create a script:
#!/bin/bash
# This is a comment
echo "Hello, World!"
Make it executable:
chmod +x script.sh
Run it:
./script.sh
The shebang: #!/bin/bash tells the system which interpreter to use.
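A common, slightly more portable variant resolves bash through PATH, which helps on systems where bash isn't at /bin/bash:
#!/usr/bin/env bash
echo "Running bash $BASH_VERSION"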
Variables
Assignment:
name="John"
count=42
path="/home/user"
⚠️ Critical: No spaces around =
name="John" # Correct
name = "John" # Wrong! This runs 'name' as a command
Using variables:
echo "Hello, $name"
echo "Count is: $count"
echo "Path: ${path}/documents" # Curly braces when needed
Command substitution:
current_date=$(date +%Y-%m-%d)
file_count=$(ls | wc -l)
user=$(whoami)
echo "Today is $current_date"
echo "You are $user"
Reading user input:
echo "Enter your name:"
read name
echo "Hello, $name!"
# Read with prompt:
read -p "Enter your age: " age
# Silent input (passwords):
read -sp "Enter password: " password
Environment variables:
echo $HOME # /home/username
echo $USER # username
echo $PATH # Executable search path
echo $PWD # Present working directory
echo $SHELL # Current shell
Special Parameters
$0 # Script name
$1 # First argument
$2 # Second argument
$9 # Ninth argument
${10} # Tenth argument (braces required for >9)
$@ # All arguments as separate strings
$* # All arguments as single string
$# # Number of arguments
$$ # Current process ID
$? # Exit code of last command
Example script:
#!/bin/bash
echo "Script name: $0"
echo "First argument: $1"
echo "All arguments: $@"
echo "Number of arguments: $#"
Usage:
$ ./script.sh apple banana cherry
Script name: ./script.sh
First argument: apple
All arguments: apple banana cherry
Number of arguments: 3
Exit Codes
Every command returns an exit code:
- 0 = Success
- Non-zero = Error
# Check last command's exit code:
ls /existing/directory
echo $? # Output: 0
ls /nonexistent/directory
echo $? # Output: 2 (error code)
Using in scripts:
#!/bin/bash
if cp source.txt dest.txt; then
echo "Copy successful"
else
echo "Copy failed"
exit 1 # Exit script with error code
fi
String Manipulation
text="Hello World"
# Length:
echo ${#text} # 11
# Substring (position:length):
echo ${text:0:5} # Hello
echo ${text:6} # World
# Replace first occurrence:
echo ${text/World/Universe} # Hello Universe
# Replace all occurrences:
fruit="apple apple apple"
echo ${fruit//apple/orange} # orange orange orange
# Remove prefix:
path="/home/user/document.txt"
echo ${path#*/} # home/user/document.txt (shortest match)
echo ${path##*/} # document.txt (longest match - basename)
# Remove suffix:
file="document.txt.backup"
echo ${file%.*} # document.txt (shortest match)
echo ${file%%.*} # document (longest match)
# Uppercase/Lowercase:
text="Hello"
echo ${text^^} # HELLO
echo ${text,,} # hello
Conditional Statements
if [[ condition ]]; then
# commands
elif [[ another_condition ]]; then
# commands
else
# commands
fi
File tests:
if [[ -e "/path/to/file" ]]; then
echo "File exists"
fi
if [[ -f "document.txt" ]]; then
echo "It's a regular file"
fi
if [[ -d "/home/user" ]]; then
echo "It's a directory"
fi
if [[ -r "file.txt" ]]; then
echo "File is readable"
fi
if [[ -w "file.txt" ]]; then
echo "File is writable"
fi
if [[ -x "script.sh" ]]; then
echo "File is executable"
fi
if [[ -s "file.txt" ]]; then
echo "File is not empty"
fi
String comparisons:
if [[ "$USER" == "akib" ]]; then
echo "Welcome, Akib"
fi
if [[ "$name" != "admin" ]]; then
echo "Not admin"
fi
if [[ -z "$variable" ]]; then
echo "Variable is empty"
fi
if [[ -n "$variable" ]]; then
echo "Variable is not empty"
fi
Numeric comparisons:
if [[ $count -eq 10 ]]; then
echo "Count is 10"
fi
if [[ $age -gt 18 ]]; then
echo "Adult"
fi
if [[ $num -lt 100 ]]; then
echo "Less than 100"
fi
if [[ $value -ge 50 ]]; then
echo "50 or more"
fi
if [[ $score -le 100 ]]; then
echo "100 or less"
fi
if [[ $result -ne 0 ]]; then
echo "Non-zero result"
fi
Logical operators:
# AND:
if [[ $age -gt 18 && $age -lt 65 ]]; then
echo "Working age"
fi
# OR:
if [[ "$user" == "admin" || "$user" == "root" ]]; then
echo "Privileged user"
fi
# NOT:
if [[ ! -f "config.txt" ]]; then
echo "Config file missing"
fi
Loops
For Loop
# Iterate over list:
for item in apple banana cherry; do
echo "Fruit: $item"
done
# Iterate over files:
for file in *.txt; do
echo "Processing $file"
# Do something with $file
done
# Iterate over command output:
for user in $(cat users.txt); do
echo "Creating account for $user"
done
# C-style loop:
for ((i=1; i<=10; i++)); do
echo "Number: $i"
done
# Range:
for i in {1..10}; do
echo $i
done
# Range with step:
for i in {0..100..10}; do
echo $i # 0, 10, 20, ..., 100
done
While Loop
# Basic while:
count=1
while [[ $count -le 5 ]]; do
echo "Count: $count"
((count++))
done
# Read file line by line:
while read -r line; do
echo "Line: $line"
done < input.txt
# Infinite loop:
while true; do
echo "Running..."
sleep 1
done
# Until loop (opposite of while):
count=1
until [[ $count -gt 5 ]]; do
echo "Count: $count"
((count++))
done
Functions
# Define function:
function greet() {
echo "Hello, $1!"
}
# Or without 'function' keyword:
greet() {
echo "Hello, $1!"
}
# Call function:
greet "World" # Output: Hello, World!
# With return value:
add() {
local result=$(($1 + $2))
echo $result
}
sum=$(add 5 3)
echo "Sum: $sum" # Sum: 8
# With explicit return code:
check_file() {
if [[ -f "$1" ]]; then
return 0 # Success
else
return 1 # Failure
fi
}
if check_file "document.txt"; then
echo "File exists"
fi
Arrays
# Create array:
fruits=("apple" "banana" "cherry")
# Access elements:
echo ${fruits[0]} # apple
echo ${fruits[1]} # banana
# All elements:
echo ${fruits[@]} # apple banana cherry
# Array length:
echo ${#fruits[@]} # 3
# Add element:
fruits+=("date")
# Loop through array:
for fruit in "${fruits[@]}"; do
echo $fruit
done
# Associative arrays (like dictionaries):
declare -A person
person[name]="John"
person[age]=30
person[city]="New York"
echo ${person[name]} # John
# Loop through keys:
for key in "${!person[@]}"; do
echo "$key: ${person[$key]}"
done
Practical Script Examples
Backup script:
#!/bin/bash
# Configuration
SOURCE="/home/user/documents"
DEST="/backup"
DATE=$(date +%Y%m%d_%H%M%S)
ARCHIVE="backup_$DATE.tar.gz"
# Create backup
echo "Starting backup..."
tar -czf "$DEST/$ARCHIVE" "$SOURCE"
if [[ $? -eq 0 ]]; then
echo "Backup successful: $ARCHIVE"
else
echo "Backup failed!"
exit 1
fi
# Delete backups older than 30 days
find "$DEST" -name "backup_*.tar.gz" -mtime +30 -delete
echo "Cleanup complete"
Log analyzer:
#!/bin/bash
LOG_FILE="/var/log/apache2/access.log"
echo "=== Top 10 IP Addresses ==="
awk '{print $1}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -10
echo ""
echo "=== Top 10 Requested Pages ==="
awk '{print $7}' "$LOG_FILE" | sort | uniq -c | sort -nr | head -10
echo ""
echo "=== HTTP Status Codes ==="
awk '{print $9}' "$LOG_FILE" | sort | uniq -c | sort -nr
System monitoring:
#!/bin/bash
# Check if disk usage exceeds 80%
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
if [[ $USAGE -gt 80 ]]; then
echo "WARNING: Disk usage is ${USAGE}%"
# Send email, SMS, etc.
fi
# Check if service is running
if ! systemctl is-active --quiet nginx; then
echo "ERROR: Nginx is not running"
sudo systemctl start nginx
fi
12. Input/Output Redirection
Control where commands read input and send output.
Output Redirection
Redirect stdout:
ls -la > file_list.txt # Overwrite
ls -la >> file_list.txt # Append
Redirect stderr:
command 2> errors.log # Only errors
command 2>> errors.log # Append errors
Redirect both stdout and stderr:
command &> output.log # Both to same file
command > output.log 2>&1 # Traditional syntax
command 2>&1 | tee output.log # Both to file and screen
Discard output:
command > /dev/null # Discard stdout
command 2> /dev/null # Discard stderr
command &> /dev/null # Discard both
Understanding file descriptors:
- 0 = stdin (standard input)
- 1 = stdout (standard output)
- 2 = stderr (standard error)
Swap stdout and stderr:
command 3>&1 1>&2 2>&3
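You can also open descriptors beyond 0-2 yourself; a minimal sketch:
exec 3> notes.txt      # open fd 3 for writing to notes.txt
echo "written via fd 3" >&3   # send output through the custom descriptor
exec 3>&-              # close fd 3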
Input Redirection
# Feed file as input:
sort < unsorted.txt
# Here document (multi-line input):
cat << EOF > output.txt
Line 1
Line 2
Line 3
EOF
# Here string:
grep "pattern" <<< "text to search"
Practical Examples
Separate output and errors:
./script.sh > output.log 2> errors.log
Log everything:
./script.sh &> full.log
Show and log:
./script.sh 2>&1 | tee output.log
Silent execution:
cron_job.sh &> /dev/null
13. Advanced Techniques and Power User Features
SSH Key-Based Authentication
Eliminate passwords and enhance security:
1. Generate key pair (on local machine):
ssh-keygen -t ed25519 -C "your_email@example.com"
Or RSA for older systems:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
Press Enter to accept defaults. Optionally set a passphrase.
2. Copy public key to server:
ssh-copy-id user@server.com
Or manually:
cat ~/.ssh/id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
3. Test:
ssh user@server.com # No password required!
Security hardening:
Edit /etc/ssh/sshd_config on server:
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
Restart SSH:
sudo systemctl restart sshd
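Tip: validate the configuration before restarting so a typo can't lock you out:
sudo sshd -t   # prints nothing when sshd_config is valid, errors otherwise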
Terminal Multiplexers: tmux and screen
Run persistent sessions that survive disconnections.
tmux Basics
Start session:
tmux
tmux new -s session_name
Detach from session:
Press Ctrl+B, then d
List sessions:
tmux ls
Attach to session:
tmux attach
tmux attach -t session_name
Essential tmux commands (prefix with Ctrl+B):
- c: Create new window
- n: Next window
- p: Previous window
- 0-9: Switch to window by number
- %: Split pane vertically
- ": Split pane horizontally
- Arrow keys: Navigate between panes
- d: Detach from session
- x: Kill current pane
- &: Kill current window
- ?: Show all keybindings
Workflow example:
# SSH into server:
ssh user@server.com
# Start tmux:
tmux new -s deployment
# Run long process:
./deploy_application.sh
# Detach: Ctrl+B, then d
# Log out: exit
# Later, reconnect:
ssh user@server.com
tmux attach -t deployment
# Your process is still running!
screen Basics
Start session:
screen
screen -S session_name
Detach from session:
Press Ctrl+A, then d
List sessions:
screen -ls
Attach to session:
screen -r
screen -r session_name
Essential screen commands (prefix with Ctrl+A):
- c: Create new window
- n: Next window
- p: Previous window
- 0-9: Switch to window by number
- S: Split horizontally
- |: Split vertically (requires configuration)
- Tab: Switch between splits
- d: Detach
- k: Kill current window
- ?: Help
Why use multiplexers:
- Run long processes on remote servers without keeping SSH connected
- Organize multiple terminal windows in one interface
- Share sessions with other users (pair programming)
- Recover from network interruptions
Advanced Find Techniques
Find and execute complex operations:
# Find files older than 30 days and compress them:
find /var/log -name "*.log" -mtime +30 -exec gzip {} \;
# Find large files and show them sorted:
find / -type f -size +100M -exec ls -lh {} \; | sort -k5 -h
# Find and move files:
find . -name "*.tmp" -exec mv {} /tmp/ \;
# Find with multiple conditions:
find . -type f \( -name "*.log" -o -name "*.txt" \) -size +1M
# Find and confirm before deleting:
find . -name "*.bak" -ok rm {} \;
# Find files modified today:
find . -type f -mtime 0
# Find files by permissions:
find . -type f -perm 777 # Exactly 777
find . -type f -perm -644 # At least 644
# Find empty files and directories:
find . -empty
# Find by owner:
find /home -user john
# Find and change permissions:
find . -type f -name "*.sh" -exec chmod +x {} \;
Advanced xargs patterns:
# Process in batches:
find . -name "*.jpg" -print0 | xargs -0 -n 10 -P 4 process_images.sh
# Build complex commands:
find . -name "*.log" | xargs -I {} sh -c 'echo "Processing {}"; gzip {}'
# Handle special characters safely:
find . -name "* *" -print0 | xargs -0 rename
# Parallel processing:
find . -name "*.txt" -print0 | xargs -0 -P 8 -I {} sh -c 'wc -l {} | tee -a count.log'
Process Management Deep Dive
Advanced process inspection:
# Show process tree:
pstree -p # With PIDs
pstree -u # With usernames
# Find process by name:
pgrep -f "python app.py"
# Kill by name (careful!):
pkill -f "python app.py"
# Show threads:
ps -T -p PID
# Real-time process monitoring with filtering:
watch -n 1 'ps aux | grep python'
# CPU-consuming processes:
ps aux --sort=-%cpu | head -10
# Memory-consuming processes:
ps aux --sort=-%mem | head -10
# Process with specific state:
ps aux | awk '$8 ~ /^Z/ {print}' # Zombie processes
Nice and renice (process priority):
# Start with lower priority:
nice -n 10 ./cpu_intensive_task.sh
# Change priority of running process:
renice -n 5 -p PID
# Priority levels: -20 (highest) to 19 (lowest)
# Default: 0
Process signals:
kill -l # List all signals
# Common signals:
kill -TERM PID # Graceful termination (default)
kill -KILL PID # Force kill (same as kill -9)
kill -HUP PID # Hangup (reload config)
kill -STOP PID # Pause
kill -CONT PID # Resume
kill -USR1 PID # User-defined signal 1
kill -USR2 PID # User-defined signal 2
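Scripts can catch most of these signals with trap; a minimal sketch (the messages are just examples):
#!/bin/bash
trap 'echo "SIGHUP received - reloading config"' HUP
trap 'echo "cleaning up"; exit 0' TERM
while true; do sleep 1; done
# From another terminal: kill -HUP <pid> or kill -TERM <pid>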
Advanced Text Processing Patterns
Complex awk programs:
# Print lines with specific field value:
awk '$3 > 100 && $5 == "active"' data.txt
# Calculate and format:
awk '{sum += $2} END {printf "Total: $%.2f\n", sum}' prices.txt
# Field manipulation:
awk '{print $2, $1}' file.txt | column -t # Swap and align
# Multiple patterns:
awk '/ERROR/ {errors++} /WARNING/ {warnings++} END {print "Errors:", errors, "Warnings:", warnings}' log.txt
# Process CSV with headers:
awk -F',' 'NR==1 {for(i=1;i<=NF;i++) header[i]=$i} NR>1 {print header[1]": "$1, header[2]": "$2}' data.csv
Sed scripting:
# Multiple substitutions:
sed -e 's/old1/new1/g' -e 's/old2/new2/g' file.txt
# Conditional replacement:
sed '/pattern/s/old/new/g' file.txt
# Delete range of lines:
sed '10,20d' file.txt
# Insert line before pattern:
sed '/pattern/i\New line here' file.txt
# Append line after pattern:
sed '/pattern/a\New line here' file.txt
# Change entire line:
sed '/pattern/c\Replacement line' file.txt
# Multiple commands from file:
sed -f commands.sed input.txt
Combining tools for complex parsing:
# Extract URLs from HTML:
grep -oP 'href="\K[^"]+' page.html | sort -u
# Parse JSON (with jq):
curl -s https://api.example.com/data | jq '.items[] | select(.status=="active") | .name'
# Parse log timestamps:
awk '{print $4}' access.log | cut -d: -f1 | sort | uniq -c
# Extract email addresses:
grep -oE '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt
Command History Tricks
Search history:
history | grep command # Search history
Ctrl+R # Reverse search (interactive)
!! # Repeat last command
!n # Run command number n
!-n # Run nth command from end
!string # Run most recent command starting with string
!?string # Run most recent command containing string
^old^new # Replace text in last command
History expansion:
# Reuse arguments:
!$ # Last argument of previous command
!* # All arguments of previous command
!^ # First argument of previous command
# Example:
ls /var/log/nginx/
cd !$ # Changes to /var/log/nginx/
Configure history:
# Add to ~/.bashrc:
export HISTSIZE=10000 # Commands in memory
export HISTFILESIZE=20000 # Commands in file
export HISTTIMEFORMAT="%F %T " # Add timestamps
export HISTCONTROL=ignoredups # Ignore duplicates
export HISTIGNORE="ls:cd:pwd" # Ignore specific commands
# Share history across terminals:
shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"
Bash Aliases and Functions
Create aliases (add to ~/.bashrc):
# Navigation shortcuts:
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'
# Safety nets:
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Common commands:
alias ll='ls -lah'
alias la='ls -A'
alias l='ls -CF'
alias grep='grep --color=auto'
# Git shortcuts:
alias gs='git status'
alias ga='git add'
alias gc='git commit'
alias gp='git push'
# System info:
alias ports='netstat -tulanp'
alias meminfo='free -m -l -t'
alias psg='ps aux | grep -v grep | grep -i -e VSZ -e'
# Safety:
alias mkdir='mkdir -pv'
# Reload bash config:
alias reload='source ~/.bashrc'
Create functions (more powerful than aliases):
# Extract any archive:
extract() {
if [ -f "$1" ]; then
  case "$1" in
    *.tar.bz2) tar xjf "$1" ;;
    *.tar.gz) tar xzf "$1" ;;
    *.bz2) bunzip2 "$1" ;;
    *.rar) unrar x "$1" ;;
    *.gz) gunzip "$1" ;;
    *.tar) tar xf "$1" ;;
    *.tbz2) tar xjf "$1" ;;
    *.tgz) tar xzf "$1" ;;
    *.zip) unzip "$1" ;;
    *.Z) uncompress "$1" ;;
    *.7z) 7z x "$1" ;;
    *) echo "'$1' cannot be extracted" ;;
  esac
else
echo "'$1' is not a valid file"
fi
}
# Create and enter directory:
mkcd() {
mkdir -p "$1" && cd "$1"
}
# Quick backup:
backup() {
cp "$1" "$1.backup-$(date +%Y%m%d-%H%M%S)"
}
# Find and replace in files:
replace() {
grep -rl "$1" . | xargs sed -i "s/$1/$2/g"
}
# Show PATH one per line:
path() {
echo "$PATH" | tr ':' '\n'
}
Performance Optimization
Benchmark commands:
# Time command execution:
time command
# More detailed:
/usr/bin/time -v command
# Benchmark alternatives:
hyperfine "command1" "command2" # Install separately
Monitor system performance:
# I/O statistics:
iostat -x 1
# Disk activity:
iotop
# Network bandwidth:
iftop
nload
# System calls:
strace -c command
# Open files by process:
lsof -p PID
# System load:
uptime
w
Disk performance:
# Test write speed:
dd if=/dev/zero of=testfile bs=1M count=1000
# Test read speed:
dd if=testfile of=/dev/null bs=1M
# Clear cache before testing:
sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"
# Measure disk I/O:
sudo hdparm -Tt /dev/sda
Security Best Practices
File security:
# Find files with dangerous permissions:
find / -type f -perm -002 2>/dev/null # World-writable files
find / -type f -perm -4000 2>/dev/null # SUID files
# Secure SSH directory:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
chmod 644 ~/.ssh/authorized_keys
chmod 644 ~/.ssh/known_hosts
# Remove world permissions:
chmod o-rwx file
# Set restrictive umask:
umask 077 # New files: 600, directories: 700
Monitor security:
# Check for failed login attempts:
sudo grep "Failed password" /var/log/auth.log
# Show recent logins:
last
# Show currently logged-in users:
w
who
# Check for listening ports:
sudo netstat -tulpn
sudo ss -tulpn
# Review sudo usage:
sudo grep sudo /var/log/auth.log
Secure file deletion:
# Overwrite before deletion:
shred -vfz -n 3 sensitive_file.txt
# Wipe free space (use carefully):
# sfill -l /path/to/mount
Systemd Service Management
Control services:
# Start/stop/restart:
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl reload nginx # Reload config without restart
# Enable/disable (start on boot):
sudo systemctl enable nginx
sudo systemctl disable nginx
# Check status:
sudo systemctl status nginx
sudo systemctl is-active nginx
sudo systemctl is-enabled nginx
# List all services:
systemctl list-units --type=service
systemctl list-units --type=service --state=running
# View logs:
sudo journalctl -u nginx
sudo journalctl -u nginx -f # Follow
sudo journalctl -u nginx --since "1 hour ago"
sudo journalctl -u nginx --since "2024-10-01" --until "2024-10-24"
# Failed services:
systemctl --failed
Create custom service:
# Create /etc/systemd/system/myapp.service:
[Unit]
Description=My Application
After=network.target
[Service]
Type=simple
User=myuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/run.sh
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
# Enable and start:
sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
Cron Job Automation
Edit crontab:
crontab -e # Edit your crontab
crontab -l # List your crontab
crontab -r # Remove your crontab
sudo crontab -u username -e # Edit another user's crontab
Crontab syntax:
# ┌───────────── minute (0-59)
# │ ┌───────────── hour (0-23)
# │ │ ┌───────────── day of month (1-31)
# │ │ │ ┌───────────── month (1-12)
# │ │ │ │ ┌───────────── day of week (0-6, Sunday=0)
# │ │ │ │ │
# * * * * * command to execute
Common patterns:
# Every minute:
* * * * * /path/to/script.sh
# Every 5 minutes:
*/5 * * * * /path/to/script.sh
# Every hour:
0 * * * * /path/to/script.sh
# Daily at 2:30 AM:
30 2 * * * /path/to/script.sh
# Every Sunday at midnight:
0 0 * * 0 /path/to/script.sh
# First day of month:
0 0 1 * * /path/to/script.sh
# Weekdays at 6 AM:
0 6 * * 1-5 /path/to/script.sh
# Multiple times:
0 6,12,18 * * * /path/to/script.sh
# At system reboot:
@reboot /path/to/script.sh
# Special shortcuts:
@yearly # 0 0 1 1 *
@monthly # 0 0 1 * *
@weekly # 0 0 * * 0
@daily # 0 0 * * *
@hourly # 0 * * * *
Best practices for cron:
# Use absolute paths:
0 2 * * * /usr/bin/python3 /home/user/backup.py
# Redirect output:
0 2 * * * /path/to/script.sh > /var/log/script.log 2>&1
# Set environment variables:
PATH=/usr/local/bin:/usr/bin:/bin
SHELL=/bin/bash
0 2 * * * /path/to/script.sh
# Email results (if mail configured):
MAILTO=admin@example.com
0 2 * * * /path/to/script.sh
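One more safeguard worth knowing: wrapping a job in flock (from util-linux) prevents overlapping runs when a job outlives its interval. The lock-file path here is just an example:
*/5 * * * * flock -n /tmp/backup.lock /path/to/script.sh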
Regular Expressions Power
Grep with regex:
# Basic patterns:
grep '^Start' file.txt # Lines starting with "Start"
grep 'end$' file.txt # Lines ending with "end"
grep '^$' file.txt # Empty lines
grep '[0-9]' file.txt # Lines with digits
grep '[A-Z]' file.txt # Lines with uppercase
grep '[aeiou]' file.txt # Lines with vowels
# Extended regex (-E):
grep -E 'cat|dog' file.txt # cat OR dog
grep -E 'colou?r' file.txt # color or colour
grep -E '[0-9]+' file.txt # One or more digits
grep -E '[0-9]{3}' file.txt # Exactly 3 digits
grep -E '[0-9]{2,4}' file.txt # 2 to 4 digits
# Perl regex (-P):
grep -P '\d+' file.txt # Digits (\d)
grep -P '\w+' file.txt # Word characters (\w)
grep -P '\s+' file.txt # Whitespace (\s)
grep -P '(?=.*\d)(?=.*[a-z])' # Lookahead assertions
# Email addresses:
grep -E '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' file.txt
# IP addresses:
grep -E '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' file.txt
# URLs:
grep -E 'https?://[^\s]+' file.txt
Command Line Efficiency Tips
Keyboard shortcuts:
Ctrl+A # Move to beginning of line
Ctrl+E # Move to end of line
Ctrl+U # Delete from cursor to beginning
Ctrl+K # Delete from cursor to end
Ctrl+W # Delete word before cursor
Alt+D # Delete word after cursor
Ctrl+L # Clear screen (like 'clear')
Ctrl+R # Reverse search history
Ctrl+G # Escape from reverse search
Ctrl+C # Cancel current command
Ctrl+Z # Suspend current command
Ctrl+D # Exit shell (or send EOF)
!! # Repeat last command
sudo !! # Repeat last command with sudo
Quick edits:
# Fix typo in previous command:
^typo^correction
# Example:
$ grpe error log.txt
^grpe^grep
# Runs: grep error log.txt
Brace expansion:
# Create multiple files:
touch file{1..10}.txt
# Creates: file1.txt, file2.txt, ..., file10.txt
# Create directory structure:
mkdir -p project/{src,bin,lib,doc}
# Copy with backup:
cp file.txt{,.bak}
# Same as: cp file.txt file.txt.bak
# Multiple extensions:
rm file.{txt,log,bak}
Command substitution:
# Use command output in another command:
echo "Today is $(date)"
mv file.txt file.$(date +%Y%m%d).txt
# Nested:
echo "Files: $(ls $(pwd))"
14. Troubleshooting and Debugging
Common Problems and Solutions
"Command not found":
# Check if command exists:
which command_name
type command_name
# Check PATH:
echo $PATH
# Find where command is:
find / -name command_name 2>/dev/null
# Add to PATH temporarily:
export PATH=$PATH:/new/directory
# Add to PATH permanently (add to ~/.bashrc):
export PATH=$PATH:/new/directory
"Permission denied":
# Check permissions:
ls -l file
# Make executable:
chmod +x script.sh
# Check ownership:
ls -l file
# Change ownership:
sudo chown user:group file
# Run with sudo:
sudo command
"No space left on device":
# Check disk space:
df -h
# Find large directories:
du -sh /* | sort -h
# Find large files:
find / -type f -size +100M -exec ls -lh {} \;
# Clear package cache (Ubuntu/Debian):
sudo apt clean
# Clear systemd journal:
sudo journalctl --vacuum-time=7d
"Too many open files":
# Check current limit:
ulimit -n
# Increase limit (temporary):
ulimit -n 4096
# Check what's using files:
lsof | wc -l
lsof -u username
# Permanent fix (edit /etc/security/limits.conf):
* soft nofile 4096
* hard nofile 8192
Process won't die:
# Try graceful kill:
kill PID
# Wait a bit, then force:
kill -9 PID
# If still alive, check:
ps aux | grep PID
# May be zombie (can't be killed, wait for parent):
ps aux | awk '$8 ~ /^Z/'
Debugging Scripts
Enable debugging:
#!/bin/bash -x # Print each command before executing
# Or:
set -x # Turn on debugging
# ... commands ...
set +x # Turn off debugging
# Strict mode (recommended):
set -euo pipefail
# -e: Exit on error
# -u: Exit on undefined variable
# -o pipefail: Pipeline fails if any command fails
Debug output:
# Add debug messages:
echo "DEBUG: variable value is $var" >&2
# Function for debug messages:
debug() {
if [[ "${DEBUG:-0}" == "1" ]]; then
echo "DEBUG: $*" >&2
fi
}
# Usage:
DEBUG=1 ./script.sh
Check syntax without running:
bash -n script.sh # Check for syntax errors
Conclusion
You now have a comprehensive guide to the Linux command line and Bash scripting, covering everything from basic navigation to advanced automation. The key to mastery is practice:
- Start simple: Use basic commands daily until they become second nature
- Build gradually: Add more complex techniques as you encounter real problems
- Automate relentlessly: Turn repetitive tasks into scripts
- Read documentation: Use man pages and --help extensively
- Experiment safely: Use test environments or directories for practice
Remember: the command line is a skill that compounds over time. Every technique you learn builds upon the last, and soon you'll find yourself crafting elegant one-liners that would have seemed impossible when you started.
Continue learning:
- Explore your system's man pages
- Read other users' scripts on GitHub
- Join Linux communities and forums
- Challenge yourself with command-line puzzles
- Build your own tools and utilities
The command line isn't just a tool; it's a superpower that makes you more productive, efficient, and capable. Master it, and you'll wonder how you ever worked without it.
NFS_Server
The setup has two parts:
- client-side configuration
- server-side configuration
Server Side Configuration:
- Install the NFS packages (names vary by distro; these are the RHEL-style names, while Debian/Ubuntu uses nfs-kernel-server):
sudo apt install nfs-utils libnfsidmap
- Enable and start the NFS services:
sudo systemctl enable rpcbind nfs-server
sudo systemctl start rpcbind nfs-server rpc-statd nfs-idmap
- Create a directory for NFS and give it full permissions:
mkdir -p $HOME/Desktop/NFS-Share
sudo chmod 777 ~/Desktop/NFS-Share
- Add the new shared filesystem to /etc/exports, then re-export:
/location <IP_allow>(rw,sync,no_root_squash)
exportfs -rv
Client Side Configuration:
- Install the NFS packages:
sudo apt install nfs-utils rpcbind
- Start the rpcbind service:
sudo systemctl start rpcbind
- Stop the firewall (or allow NFS through it instead):
sudo systemctl stop firewalld
- List the mounts exported by the NFS server:
showmount -e <IP of server side>
- Create a mount point (directory):
mkdir -p /mnt/share
- Mount the NFS filesystem:
mount <IP_server>:/location /mnt/share
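To make the mount persist across reboots, the usual approach is an /etc/fstab entry (same placeholders as above):
<IP_server>:/location /mnt/share nfs defaults 0 0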
Setting Up SSH Server Between PC and Server
This guide explains how to set up and configure an SSH server to enable secure communication between a client PC and a server.
Prerequisites
- A Linux-based PC (client) and server.
- SSH package installed on both machines.
- Network connectivity between the PC and the server.
Step-by-Step Instructions
Step 1: Install OpenSSH
On both the client and server, install the OpenSSH package:
On the Server:
sudo apt update
sudo apt install openssh-server
On the Client:
sudo apt update
sudo apt install openssh-client
Step 2: Start and Enable SSH Service
Ensure the SSH service is running on the server:
sudo systemctl start ssh
sudo systemctl enable ssh
Check the service status:
sudo systemctl status ssh
Step 3: Configure SSH on the Server
1. Open the SSH configuration file:
sudo nano /etc/ssh/sshd_config
2. Modify or verify the following settings:
- PermitRootLogin: Set to no for security.
- PasswordAuthentication: Set to yes to allow password-based logins initially (you can disable it after setting up key-based authentication).
3. Save changes and restart the SSH service:
sudo systemctl restart ssh
Step 4: Determine the Server's IP Address
Find the server's IP address so you can connect from the client:
ip a
Look for the IP address under the active network interface (e.g., 192.168.x.x).
Step 5: Test SSH Connection from the Client
On the client, open a terminal and connect to the server using:
ssh username@server_ip
Replace username with the server's username and server_ip with the actual IP address.
Example:
ssh user@192.168.1.10
Step 6: Set Up Key-Based Authentication
1. On the client, generate an SSH key pair:
ssh-keygen -t rsa -b 4096
2. Copy the public key to the server. On Linux:
ssh-copy-id username@server_ip
On Windows, from the .ssh folder (note: this overwrites any existing authorized_keys on the server):
scp $env:USERPROFILE/.ssh/id_rsa.pub username@ip:~/.ssh/authorized_keys
3. Verify key-based login:
ssh username@server_ip
4. Disable password-based logins for added security:
- Edit the server's SSH configuration file:
sudo nano /etc/ssh/sshd_config
- Set PasswordAuthentication to no.
- Restart the SSH service:
sudo systemctl restart ssh
Step 7: Troubleshooting Common Issues
- Firewall: Ensure SSH traffic is allowed through the firewall on the server:
sudo ufw allow ssh
sudo ufw enable
- Connection Refused: Check that the SSH service is running and that the correct IP address is used.
PostfixMail
Postfix Config lines
Add the following lines to /etc/postfix/main.cf:
relayhost = [smtp.gmail.com]:587
myhostname = your_hostname
# Location of the sasl_passwd file we saved
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
# Enable SASL authentication for Postfix
smtp_sasl_auth_enable = yes
smtp_tls_security_level = encrypt
# Disallow methods that permit anonymous authentication
smtp_sasl_security_options = noanonymous
Create a file under /etc/postfix/sasl/
Filename: sasl_passwd
Add the line below:
[smtp.gmail.com]:587 email@gmail.com:password
Change into the directory:
cd /etc/postfix/sasl
Change the ownership:
sudo chown root:root *
Change the permissions:
sudo chmod 600 *
Convert the sasl_passwd file into a db file:
postmap /etc/postfix/sasl/sasl_passwd
Start the Postfix service:
sudo systemctl start postfix
To send an email from the Linux terminal:
echo "Test Mail" | mail -s "Postfix TEST" paul@gmail.com
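If the test mail never arrives, the mail log usually says why (path varies by distro):
sudo tail -f /var/log/mail.log   # Debian/Ubuntu; RHEL-style systems log to /var/log/maillog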
Linux Commands for System Administrators
Basic commands for system monitoring
sudo du -a / | sort -n -r | head -n 20   # list the 20 largest disk-usage entries
journalctl | grep "error"   # log messages, kernel messages, and other system-related information
dmesg --ctime | grep error   # show errors from the kernel ring buffer
sudo journalctl -p 3 -xb   # priority-3 (error) messages from the current boot
sudo systemctl --failed   # services that failed to load
du -sh .config   # size of a specific directory
find . -type f -exec grep -l "/dev/nvme0n1" {} +   # find files and run grep on them; {} stands for each result, and + (or \;) terminates -exec
Reset password (for a forgotten password)
Reset Root Password
init=/bin/bash   # from the GRUB command mode, find the kernel line starting with "linux" and append this at the end
Ctrl+x or F10   # boot with the edited entry
mount -o remount,rw /   # remount the root filesystem read-write
passwd   # change the password
reboot -f   # force reboot; afterwards you can log in with the new password
Reset User Password
rw init=/bin/bash   # from the GRUB command mode, find the kernel line starting with "linux" and append this at the end
Ctrl+x or F10   # boot with the edited entry
passwd username   # change the user's password
reboot -f   # force reboot; afterwards the user can log in with the new password
Some Useful Commands
grep -Irl "akib" .   # list files containing "akib" under the current directory
grep -A 3 -B 3 "nvme" flake.nix   # show matches in flake.nix with 3 lines of context before and after
sed -i "s/akib/withNewText/g" file.txt   # replace every occurrence in the file
cat /etc/passwd | column -t -s ":" -N USERNAME,PW,UID,GUID,COMMENT,HOME,INTERPRETER -J -n passwdFile   # split /etc/passwd on the ":" delimiter into named columns; -J emits JSON
cat /etc/passwd | awk -F: 'BEGIN {printf "user\tPW\tUID\tGUID\tCOMMENT\tHOME\tINTERPRETER\n"} {printf "%s\t%s\t%s\t%s\t%s\t%s\t%s\n", $1, $2, $3, $4, $5, $6, $7}'   # the same idea with awk
cat /etc/passwd | column -t -s ":" -N USERNAME,PW,UID,GUID,COMMENT,HOME,INTERPRETER -H PW -O UID,USERNAME,GUID,COMMENT,HOME,INTERPRETER   # -H hides a column, -O reorders columns
Basic
🛠️ Common Ports & Protocols Cheat Sheet
A quick reference for well-known TCP/UDP ports and their usage. Useful for students, professionals, and anyone studying for certifications like CCNA, CompTIA, or Security+.
Well-Known / System Ports (0 – 1023)
| Port | Service | Protocol | Description |
|---|---|---|---|
| 7 | Echo | TCP, UDP | Echo service |
| 19 | CHARGEN | TCP, UDP | Character Generator Protocol (rarely used, vulnerable) |
| 20 | FTP-data | TCP, SCTP | File Transfer Protocol (data) |
| 21 | FTP | TCP, UDP, SCTP | File Transfer Protocol (control) |
| 22 | SSH/SCP/SFTP | TCP, UDP, SCTP | Secure Shell, secure logins, file transfers, port forwarding |
| 23 | Telnet | TCP | Unencrypted text communication |
| 25 | SMTP | TCP | Simple Mail Transfer Protocol (email routing) |
| 53 | DNS | TCP, UDP | Domain Name System |
| 67 | DHCP/BOOTP | UDP | DHCP Server |
| 68 | DHCP/BOOTP | UDP | DHCP Client |
| 69 | TFTP | UDP | Trivial File Transfer Protocol |
| 80 | HTTP | TCP, UDP, SCTP | Web traffic (HTTP/1.x, HTTP/2 over TCP; HTTP/3 uses QUIC/UDP) |
| 88 | Kerberos | TCP, UDP | Network authentication system |
| 110 | POP3 | TCP | Post Office Protocol (email retrieval) |
| 123 | NTP | UDP | Network Time Protocol |
| 135 | Microsoft RPC EPMAP | TCP, UDP | Remote Procedure Call Endpoint Mapper |
| 137-139 | NetBIOS | TCP, UDP | NetBIOS services (name service, datagram, session) |
| 143 | IMAP | TCP, UDP | Internet Message Access Protocol |
| 161-162 | SNMP | UDP | Simple Network Management Protocol (unencrypted) |
| 179 | BGP | TCP | Border Gateway Protocol |
| 389 | LDAP | TCP, UDP | Lightweight Directory Access Protocol |
| 443 | HTTPS | TCP, UDP, SCTP | Secure web traffic (SSL/TLS) |
| 445 | Microsoft DS SMB | TCP, UDP | File sharing, Active Directory |
| 465 | SMTPS | TCP | SMTP over SSL/TLS |
| 514 | Syslog | UDP | System log protocol |
| 520 | RIP | UDP | Routing Information Protocol |
| 546-547 | DHCPv6 | UDP | DHCP for IPv6 (client/server) |
| 636 | LDAPS | TCP, UDP | LDAP over SSL |
| 993 | IMAPS | TCP | IMAP over SSL/TLS |
| 995 | POP3S | TCP, UDP | POP3 over SSL/TLS |
Registered Ports (1024 – 49151)
| Port | Service | Protocol | Description |
|---|---|---|---|
| 1025 | Microsoft RPC | TCP | RPC service |
| 1080 | SOCKS proxy | TCP, UDP | Proxy protocol |
| 1194 | OpenVPN | TCP, UDP | VPN tunneling |
| 1433 | MS-SQL Server | TCP | Microsoft SQL Server |
| 1521 | Oracle DB | TCP | Oracle Database listener |
| 1701 | L2TP | TCP | Layer 2 Tunneling Protocol |
| 1720 | H.323 | TCP | VoIP signaling |
| 1723 | PPTP | TCP, UDP | VPN protocol (deprecated) |
| 1812-1813 | RADIUS | UDP | Authentication, accounting |
| 2049 | NFS | TCP, UDP | Network File System |
| 2082-2083 | cPanel | TCP, UDP | Web hosting control panel |
| 2222 | DirectAdmin | TCP | Hosting control panel |
| 2483-2484 | Oracle DB | TCP, UDP | Insecure & SSL listener |
| 3074 | Xbox Live | TCP, UDP | Online gaming |
| 3128 | HTTP Proxy | TCP | Common proxy port |
| 3260 | iSCSI Target | TCP, UDP | Storage protocol |
| 3306 | MySQL | TCP | Database system |
| 3389 | RDP | TCP | Windows Remote Desktop |
| 3690 | SVN | TCP, UDP | Apache Subversion |
| 3724 | World of Warcraft | TCP, UDP | Gaming |
| 4333 | mSQL | TCP | Mini SQL |
| 4444 | Blaster Worm | TCP, UDP | Malware |
| 5000 | UPnP | TCP | Universal Plug & Play |
| 5060-5061 | SIP | TCP, UDP | Session Initiation Protocol (VoIP) |
| 5222-5223 | XMPP | TCP, UDP | Messaging protocol |
| 5432 | PostgreSQL | TCP | Database system |
| 5900-5999 | VNC | TCP, UDP | Remote desktop (VNC) |
| 6379 | Redis | TCP | In-memory database |
| 6665-6669 | IRC | TCP | Internet Relay Chat |
| 6881-6999 | BitTorrent | TCP, UDP | File sharing |
| 8080 | HTTP Proxy/Alt | TCP | Alternate web port |
| 8443 | HTTPS Alt | TCP | Alternate secure web port |
| 9042 | Cassandra | TCP | NoSQL database |
| 9100 | Printer (PDL) | TCP | Print Data Stream |
Dynamic / Private Ports (49152 – 65535)
These are used for ephemeral connections and custom apps. Safe to use for internal development/testing.
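On Linux you can check (and tune) the ephemeral range the kernel actually uses, which often differs from the IANA default of 49152+:
cat /proc/sys/net/ipv4/ip_local_port_range   # commonly 32768 60999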
🎯 Most Common Ports for Exams
If you're preparing for CCNA / CompTIA exams, focus on these:
| Port | Service |
|---|---|
| 7 | Echo |
| 20, 21 | FTP |
| 22 | SSH/SCP |
| 23 | Telnet |
| 25 | SMTP |
| 53 | DNS |
| 67, 68 | DHCP |
| 69 | TFTP |
| 80 | HTTP |
| 88 | Kerberos |
| 110 | POP3 |
| 123 | NTP |
| 137-139 | NetBIOS |
| 143 | IMAP |
| 161, 162 | SNMP |
| 389 | LDAP |
| 443 | HTTPS |
| 445 | SMB |
| 636 | LDAPS |
| 3389 | RDP |
| 5060-5061 | SIP (VoIP) |
✅ Conclusion
Familiarity with ports & protocols is essential for:
- Building secure applications
- Troubleshooting network issues
- Passing certification exams
Keep this cheat sheet handy as a quick reference!
IPv4 Subnetting Cheat Sheet
Subnetting is one of the most fundamental yet challenging concepts in networking. This cheat sheet provides quick references to help you master IPv4 subnetting for certifications, administration, and network design.
IPv4 Subnets
Subnetting allows a host to determine if the destination machine is local or remote. The subnet mask determines how many IPv4 addresses are assignable within a network.
| CIDR | Subnet Mask | # of Addresses | Wildcard |
|---|---|---|---|
| /32 | 255.255.255.255 | 1 | 0.0.0.0 |
| /31 | 255.255.255.254 | 2 | 0.0.0.1 |
| /30 | 255.255.255.252 | 4 | 0.0.0.3 |
| /29 | 255.255.255.248 | 8 | 0.0.0.7 |
| /28 | 255.255.255.240 | 16 | 0.0.0.15 |
| /27 | 255.255.255.224 | 32 | 0.0.0.31 |
| /26 | 255.255.255.192 | 64 | 0.0.0.63 |
| /25 | 255.255.255.128 | 128 | 0.0.0.127 |
| /24 | 255.255.255.0 | 256 | 0.0.0.255 |
| /23 | 255.255.254.0 | 512 | 0.0.1.255 |
| /22 | 255.255.252.0 | 1024 | 0.0.3.255 |
| /21 | 255.255.248.0 | 2,048 | 0.0.7.255 |
| /20 | 255.255.240.0 | 4,096 | 0.0.15.255 |
| /19 | 255.255.224.0 | 8,192 | 0.0.31.255 |
| /18 | 255.255.192.0 | 16,384 | 0.0.63.255 |
| /17 | 255.255.128.0 | 32,768 | 0.0.127.255 |
| /16 | 255.255.0.0 | 65,536 | 0.0.255.255 |
| /15 | 255.254.0.0 | 131,072 | 0.1.255.255 |
| /14 | 255.252.0.0 | 262,144 | 0.3.255.255 |
| /13 | 255.248.0.0 | 524,288 | 0.7.255.255 |
| /12 | 255.240.0.0 | 1,048,576 | 0.15.255.255 |
| /11 | 255.224.0.0 | 2,097,152 | 0.31.255.255 |
| /10 | 255.192.0.0 | 4,194,304 | 0.63.255.255 |
| /9 | 255.128.0.0 | 8,388,608 | 0.127.255.255 |
| /8 | 255.0.0.0 | 16,777,216 | 0.255.255.255 |
| /7 | 254.0.0.0 | 33,554,432 | 1.255.255.255 |
| /6 | 252.0.0.0 | 67,108,864 | 3.255.255.255 |
| /5 | 248.0.0.0 | 134,217,728 | 7.255.255.255 |
| /4 | 240.0.0.0 | 268,435,456 | 15.255.255.255 |
| /3 | 224.0.0.0 | 536,870,912 | 31.255.255.255 |
| /2 | 192.0.0.0 | 1,073,741,824 | 63.255.255.255 |
| /1 | 128.0.0.0 | 2,147,483,648 | 127.255.255.255 |
| /0 | 0.0.0.0 | 4,294,967,296 | 255.255.255.255 |
Decimal to Binary Conversion
IPv4 addresses are actually 32-bit binary numbers. Subnet masks in binary show which part is the network and which part is the host.
| Subnet Mask | Binary | Wildcard | Binary Wildcard |
|---|---|---|---|
| 255 | 1111 1111 | 0 | 0000 0000 |
| 254 | 1111 1110 | 1 | 0000 0001 |
| 252 | 1111 1100 | 3 | 0000 0011 |
| 248 | 1111 1000 | 7 | 0000 0111 |
| 240 | 1111 0000 | 15 | 0000 1111 |
| 224 | 1110 0000 | 31 | 0001 1111 |
| 192 | 1100 0000 | 63 | 0011 1111 |
| 128 | 1000 0000 | 127 | 0111 1111 |
| 0 | 0000 0000 | 255 | 1111 1111 |
Why Learn Binary?
- 1 = Network portion
- 0 = Host portion
- A subnet mask must be all ones followed by all zeros.
Example: A /24 (255.255.255.0) subnet reserves 24 bits for the network and 8 bits for hosts → 254 usable IPs.
/28 Example: If your ISP gives you 199.44.6.80/28, you calculate host addresses by binary increments → usable range = .81 - .94.
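A rough bash sketch of that /28 arithmetic (only valid for prefixes of /24 and longer, where all host bits sit in the last octet):
prefix=28; last_octet=80
block=$(( 1 << (32 - prefix) ))   # /28 -> block size 16
net=$(( last_octet / block * block ))   # network: .80
bcast=$(( net + block - 1 ))   # broadcast: .95
echo "usable: .$((net + 1)) - .$((bcast - 1))"   # .81 - .94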
IPv4 Address Classes
| Class | Range |
|---|---|
| A | 0.0.0.0 – 127.255.255.255 |
| B | 128.0.0.0 – 191.255.255.255 |
| C | 192.0.0.0 – 223.255.255.255 |
| D | 224.0.0.0 – 239.255.255.255 |
| E | 240.0.0.0 – 255.255.255.255 |
Reserved (Private) Ranges
| Range Type | IP Range |
|---|---|
| Class A | 10.0.0.0 – 10.255.255.255 |
| Class B | 172.16.0.0 – 172.31.255.255 |
| Class C | 192.168.0.0 – 192.168.255.255 |
| Localhost | 127.0.0.0 – 127.255.255.255 |
| Zeroconf (APIPA) | 169.254.0.0 – 169.254.255.255 |
Key Terminology
- Wildcard Mask: Indicates available address bits for matching.
- CIDR: Classless Inter-Domain Routing, uses /XX notation.
- Network Portion: Fixed part of the IP determined by the subnet mask.
- Host Portion: Variable part of IP usable for devices.
Conclusion
IPv4 subnetting can seem complex, but with practice and binary understanding, it becomes second nature. Keep this sheet handy for quick reference during exams, troubleshooting, or design work.
Tools
curl Cheat Sheet
Quick: Practical, consultant-style reference for using
curl: from basic GETs to file uploads, API interactions, cookies, scripting tips and advanced flags. Friendly tone, focused on getting you productive fast.
Table of contents
- What is curl?
- Quick examples โ Web browsing & headers
- Downloading files
- GET requests
- POST requests & forms
- API interaction & headers
- File uploads with --form / -F
- Cookies and sessions
- Scripting with curl
- Advanced & debugging flags
- Partial downloads & ranges
- Helpful one-line examples
- Etiquette & safety note
What is curl?
curl (client URL) is a command-line tool for transferring data with URL syntax. It supports many protocols (HTTP/S, FTP, SCP, SMTP, IMAP, POP3, etc.) and is ideal for quick checks, automation, scripting, API calls, and sometimes creative (or mischievous) automation.
Use it when you need protocol-level control from the terminal.
Quick examples โ Web browsing & headers
| Command | Description |
|---|---|
curl http://example.com | Print HTML body of http://example.com to stdout |
curl --list-only "http://example.com/dir/" (-l) | List directory contents (if server allows) |
curl --location URL (-L) | Follow 3xx redirects |
curl --head URL (-I) | Fetch HTTP response headers only |
curl --head --show-error URL | Headers and errors (helpful for down/unresponsive hosts) |
Downloading files
| Command | Notes |
|---|---|
curl --output hello.html http://example.com (-o) | Save output to hello.html |
curl --remote-name URL (-O) | Save file using remote filename |
curl --remote-name URL --output newname | Download then rename locally |
curl --remote-name --continue-at - URL | Resume partial download (if server supports ranges) |
curl "https://site/{a,b,c}.html" --output "file_#1.html" | Download multiple variants using brace expansion and # placeholders |
Batch download pattern (extract links then download):
curl -L http://example.com/list/ | grep '\.mp4' | cut -d '"' -f 8 | while read i; do curl http://example.com/${i} -O; done
(Adjust grep/cut to the page structure.)
GET requests
| Command | Description |
|---|---|
curl --request GET "http://example.com" (-X GET) | Explicit GET request (usually optional) |
curl -s -w '%{remote_ip} %{time_total} %{http_code}\n' -o /dev/null URL | Silent mode with custom output: IP, total time, HTTP code |
Example: fetch a JSON API (may require headers or tokens):
curl -X GET 'https://api.example.com/items?filter=all' -H 'Accept: application/json'
POST requests & forms
| Command | Description |
|---|---|
curl --request POST URL -d 'key=value' (-X POST -d) | Send URL-encoded data in the request body |
curl -H 'Content-Type: application/json' --data-raw '{"k":"v"}' URL | Send raw JSON payload (set content-type) |
Examples with MongoDB Data API (illustrative):
# Insert document
curl --request POST 'https://data.mongodb-api/.../insertOne' \
--header 'Content-Type: application/json' \
--header 'api-key: YOUR_KEY' \
--data-raw '{"dataSource":"Cluster0","database":"db","collection":"c","document":{ "name":"Alice" }}'
# Find one
curl --request POST 'https://data.mongodb-api/.../findOne' \
--header 'Content-Type: application/json' \
--header 'api-key: YOUR_KEY' \
--data-raw '{"filter":{"name":"Alice"}}'
API interaction & headers
| Command | Description |
|---|---|
-H / --header | Add custom HTTP header (Auth tokens, Content-Type, Accept, etc.) |
curl --header "Auth-Token:$TOKEN" URL | Pass bearer or custom tokens in headers |
curl --user username:password URL | Basic auth (-u username:password) |
Examples:
curl -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' https://api.example.com/me
curl -u 'user:password' 'https://example.com/protected'
File uploads with --form / -F
Use -F to emulate HTML form file uploads (multipart/form-data).
| Command | Description |
|---|---|
curl --form "file=@/path/to/file" URL | Upload file (use @ for relative or @/abs/path) |
curl --form "field=value" --form "file=@/path" URL | Mix fields and files in one request |
Notes:
- The @ prefix works for both relative (@filename) and absolute (@/abs/path) paths.
- Without the @, curl sends the value as a literal string rather than the file's contents (use < instead of @ to send a file's contents as a plain field value).
Examples:
curl -F "email=test@me.com" -F "submit=Submit" 'https://docs.google.com/forms/d/e/FORM_ID/formResponse' > output.html
curl -F "entry.123456789=@/Users/me/pic.jpg" 'https://example.com/upload' > response.html
Cookies and sessions
| Command | Description |
|---|---|
curl --cookie "name=val;name2=val2" URL (-b) | Send cookie(s) inline |
curl --cookie cookies.txt URL | Load cookies from file (cookies.txt with k=v;... format) |
curl --cookie-jar mycookies.txt URL (-c) | Save cookies received into mycookies.txt |
curl --dump-header headers.txt URL (-D) | Dump response headers (includes Set-Cookie) |
Cookie file format (simple):
key1=value1;key2=value2
Scripting with curl
curl is a natural fit for bash automation. Example script patterns:
- Reusable function wrapper for API calls (add auth header once)
- Download + checksum verification loop
- Rate-limited loops for polite scraping (
sleepbetween requests)
Example: simple reusable function
api_get(){
local endpoint="$1"
curl -s -H "Authorization: Bearer $API_KEY" "https://api.example.com/${endpoint}"
}
api_get "items"
Advanced & debugging flags
| Flag | Purpose |
|---|---|
-h | Show help |
--version | Show curl version and features |
-v | Verbose (request/response) |
--trace filename | Detailed trace of operations and data |
-s | Silent mode (no progress meter) |
-S | Show error when used with -s |
-L | Follow redirects |
--connect-timeout | Seconds to wait for TCP connect |
-m / --max-time | Max operation time in seconds |
-w / --write-out | Print variables after completion (%{http_code}, %{time_total}, %{remote_ip}, etc.) |
Examples:
curl -v https://example.com
curl --trace trace.txt https://twitter.com/
curl -s -w '%{remote_ip} %{time_total} %{http_code}\n' -o /dev/null http://ankush.io
curl -L 'https://short.url' --connect-timeout 0.1
Partial downloads & ranges
Use -r to request byte ranges from HTTP/FTP responses (helpful for resuming or grabbing file snippets).
| Command | Notes |
|---|---|
curl -r 0-99 http://example.com | First 100 bytes |
curl -r -500 http://example.com | Last 500 bytes |
curl -r 0-99 ftp://ftp.example.com | Ranges on FTP (explicit start/end required) |
Helpful one-line examples
# Show headers only
curl -I https://example.com
# Save response to file quietly
curl -sL https://example.com -o page.html
# POST JSON and pretty-print reply (using jq)
curl -s -H "Content-Type: application/json" -d '{"name":"A"}' https://api.example.com/insert | jq
# Upload file with field name "file"
curl -F "file=@./image.jpg" https://api.example.com/upload
# Send cookies from file and save response headers
curl -b cookies.txt -D headers.txt https://example.com
# Send URL-encoded form field
curl -d "field1=value1&field2=value2" -X POST https://form-endpoint
Request example (SMS via textbelt โ use responsibly)
curl -X POST https://textbelt.com/text \
--data-urlencode phone='+[E.164 number]' \
--data-urlencode message='Please delete this message.' \
-d key=textbelt
Response example: {"success":true,...} (service-dependent)
Etiquette & safety note
- Only target servers or forms you own or have explicit permission to test. Abuse (flooding, unauthorized automation, fraud) is illegal and unethical.
- Prefer --connect-timeout and rate-limiting in scripts to avoid hammering servers.
- Keep secrets out of command history: use environment variables or --netrc where appropriate.
Nmap Cheat Sheet
Quick: A concise, practical reference for common Nmap workflows: target selection, scan types, discovery, NSE usage, output handling, evasion tricks, and useful one-liners. Designed like a consultant's quick reference, organized by category so you can scan and apply fast.
Table of contents
- Overview & Usage Tips
- Target Specification
- Scan Techniques
- Host Discovery
- Port Specification
- Service & Version Detection
- OS Detection
- Timing & Performance
- Timing Tunables
- NSE (Nmap Scripting Engine)
- Useful NSE Examples
- Firewall / IDS Evasion & Spoofing
- Output Formats & Options
- Helpful Output Examples & Pipelines
- Miscellaneous Flags & Other Commands
- Practical Tips & Etiquette
Overview & Usage Tips
- Run Nmap as root (or with sudo) for the most feature-complete scans (e.g., SYN -sS, raw packets, OS detection).
- Start with discovery (-sn) and light scans (-T3 -F -sV) to find live hosts before aggressive options.
- Log results (-oA) so you can re-analyze and resume scans later.
- Respect scope & permissions: scanning networks you don't own can be illegal.
Target Specification
Define which IPs/ranges/subnets Nmap should scan.
| Switch / Syntax | Example | Description |
|---|---|---|
| Single IP | nmap 192.168.1.1 | Scan a single host |
| Multiple IPs | nmap 192.168.1.1 192.168.2.1 | Scan specific hosts |
| Range | nmap 192.168.1.1-254 | Scan an IP range |
| Domain | nmap scanme.nmap.org | Scan a hostname |
| CIDR | nmap 192.168.1.0/24 | CIDR subnet scan |
-iL | nmap -iL targets.txt | Read targets from file |
-iR | nmap -iR 100 | Scan 100 random hosts |
--exclude | nmap --exclude 192.168.1.1 | Exclude host(s) from scan |
Nmap Scan Techniques
Pick based on stealth, permissions, and speed.
| Switch | Example | Description |
|---|---|---|
-sS | nmap 192.168.1.1 -sS | TCP SYN scan (stealthy; default with privileges) |
-sT | nmap 192.168.1.1 -sT | TCP connect() scan (no raw socket required) |
-sU | nmap 192.168.1.1 -sU | UDP scan |
-sA | nmap 192.168.1.1 -sA | ACK scan (firewall mapping) |
-sW | nmap 192.168.1.1 -sW | Window scan |
-sM | nmap 192.168.1.1 -sM | Maimon scan |
-A | nmap 192.168.1.1 -A | Aggressive โ OS, version, scripts, traceroute |
Host Discovery
Find out which hosts are up before scanning ports or when skipping port scans.
| Switch | Example | Description |
|---|---|---|
-sL | nmap 192.168.1.1-3 -sL | List scan - do not send probes (target listing only) |
-sn | nmap 192.168.1.1/24 -sn | Ping / host discovery only (no port scan) |
-Pn | nmap 192.168.1.1-5 -Pn | Skip host discovery (treat all hosts as up) |
-PS | nmap 192.168.1.1-5 -PS22-25,80 | TCP SYN discovery on specified ports (80 default) |
-PA | nmap 192.168.1.1-5 -PA22-25,80 | TCP ACK discovery on specified ports (80 default) |
-PU | nmap 192.168.1.1-5 -PU53 | UDP discovery on specified ports (40125 default) |
-PR | nmap 192.168.1.0/24 -PR | ARP discovery (local nets only) |
-n | nmap 192.168.1.1 -n | Never perform DNS resolution |
Port Specification
Target specific ports, ranges, or mixed TCP/UDP sets.
| Switch | Example | Description |
|---|---|---|
-p | nmap 192.168.1.1 -p 21 | Scan single port |
-p | nmap 192.168.1.1 -p 21-100 | Scan port range |
-p | nmap 192.168.1.1 -p U:53,T:21-25,80 | Mix UDP and TCP ports |
-p- | nmap 192.168.1.1 -p- | Scan all TCP ports (1-65535) |
| Service names | nmap 192.168.1.1 -p http,https | Use service names instead of numbers |
-F | nmap 192.168.1.1 -F | Fast scan - top 100 ports |
--top-ports | nmap 192.168.1.1 --top-ports 2000 | Scan top N ports by frequency |
-p0- / -p-65535 | nmap 192.168.1.1 -p0- | Open-ended ranges; -p0- will scan from 0 to 65535 |
Service & Version Detection
Try to identify the service and its version running on discovered ports.
| Switch | Example | Description |
|---|---|---|
-sV | nmap 192.168.1.1 -sV | Service/version detection |
-sV --version-intensity | nmap 192.168.1.1 -sV --version-intensity 8 | Intensity 0-9; higher = more probing |
--version-light | nmap 192.168.1.1 -sV --version-light | Lighter/faster detection (less reliable) |
--version-all | nmap 192.168.1.1 -sV --version-all | Full (intensity 9) detection |
-A | nmap 192.168.1.1 -A | Includes -sV, OS detection, NSE scripts, traceroute |
OS Detection
Fingerprint the target TCP/IP stack to guess the OS.
| Switch | Example | Description |
|---|---|---|
-O | nmap 192.168.1.1 -O | Remote OS detection (TCP/IP fingerprinting) |
--osscan-limit | nmap 192.168.1.1 -O --osscan-limit | Skip OS detection unless ports show open/closed pattern |
--osscan-guess | nmap 192.168.1.1 -O --osscan-guess | Be more aggressive about guesses |
--max-os-tries | nmap 192.168.1.1 -O --max-os-tries 1 | Limit how many OS probe attempts are made |
-A | nmap 192.168.1.1 -A | OS detection included with -A |
Timing & Performance
Built-in timing templates trade off speed vs stealth.
| Switch | Example | Description |
|---|---|---|
-T0 | nmap 192.168.1.1 -T0 | Paranoid - max IDS evasion (very slow) |
-T1 | nmap 192.168.1.1 -T1 | Sneaky - IDS evasion |
-T2 | nmap 192.168.1.1 -T2 | Polite - reduce bandwidth/CPU usage |
-T3 | nmap 192.168.1.1 -T3 | Normal (default) |
-T4 | nmap 192.168.1.1 -T4 | Aggressive - faster but noisier |
-T5 | nmap 192.168.1.1 -T5 | Insane - assumes a very fast, reliable network |
Timing Tunables (Fine Control)
Adjust timeouts, parallelism, rates and retries.
- --host-timeout <time>: give up on a host after this time (e.g., --host-timeout 2m).
- --min-rtt-timeout, --max-rtt-timeout, --initial-rtt-timeout <time>: control probe RTT timeouts.
- --min-hostgroup, --max-hostgroup <size>: group size for parallel host scanning.
- --min-parallelism, --max-parallelism <num>: probe parallelization controls.
- --max-retries <tries>: maximum retransmissions.
- --min-rate <n> / --max-rate <n>: packet send rate bounds.
Examples:
nmap --host-timeout 4m --max-retries 2 192.168.1.1
nmap --min-rate 100 --max-rate 1000 -p- 192.168.1.0/24
NSE (Nmap Scripting Engine)
Use scripts to automate checks, fingerprinting, vulnerability discovery and enumeration.
| Switch | Example | Notes |
|---|---|---|
-sC | nmap 192.168.1.1 -sC | Run default safe scripts (convenient discovery) |
--script | nmap 192.168.1.1 --script http* | Run scripts by name or wildcard |
--script <script1>,<script2> | nmap --script banner,http-title | Run specific scripts |
--script-args | nmap --script snmp-sysdescr --script-args snmpcommunity=public | Provide args to scripts |
--script "not intrusive" | nmap --script "default and not intrusive" | Compose script sets (example) |
Useful NSE Examples
A few practical one-liners to keep handy.
# Generate sitemap from web server (HTTP):
nmap -Pn --script=http-sitemap-generator scanme.nmap.org
# Fast random search for web servers:
nmap -n -Pn -p 80 --open -sV -vvv --script banner,http-title -iR 1000
# Brute-force DNS hostnames (subdomain guessing):
nmap -Pn --script=dns-brute domain.com
# Safe SMB enumeration (useful on internal networks):
nmap -n -Pn -vv -O -sV --script smb-enum*,smb-ls,smb-mbenum,smb-os-discovery,smb-vuln* 192.168.1.1
# Whois queries via scripts:
nmap --script whois* domain.com
# Detect XSS-style unsafe output escaping on HTTP port 80:
nmap -p80 --script http-unsafe-output-escaping scanme.nmap.org
# Check for SQL injection (scripted):
nmap -p80 --script http-sql-injection scanme.nmap.org
Firewall / IDS Evasion & Spoofing
Techniques to make traffic less obvious. Use responsibly.
| Switch | Example | Description |
|---|---|---|
-f | nmap 192.168.1.1 -f | Fragment packets (can evade some filters) |
--mtu | nmap 192.168.1.1 --mtu 32 | Set MTU/fragment size |
-D | nmap -D decoy1,decoy2,ME,decoy3 target | Decoy IP addresses to confuse observers |
-S | nmap -S 1.2.3.4 target | Spoof source IP (may require raw sockets) |
-g | nmap -g 53 target | Set source port (useful to bypass simple filters) |
--proxies | nmap --proxies http://192.168.1.1:8080 target | Relay scans through HTTP/SOCKS proxies |
--data-length | nmap --data-length 200 target | Append random data to packets |
Example IDS evasion command
nmap -f -T0 -n -Pn --data-length 200 -D 192.168.1.101,192.168.1.102,192.168.1.103,192.168.1.23 192.168.1.1
Output Formats & Options
Save scans so you can analyze later or process programmatically.
| Switch | Example | Description |
|---|---|---|
-oN | nmap 192.168.1.1 -oN normal.file | Normal human-readable output file |
-oX | nmap 192.168.1.1 -oX xml.file | XML output (good for parsing) |
-oG | nmap 192.168.1.1 -oG grep.file | Grepable output (legacy) |
-oA | nmap 192.168.1.1 -oA results | Write results.nmap, results.xml, results.gnmap |
-oG - | nmap 192.168.1.1 -oG - | Print grepable to stdout |
--append-output | nmap -oN file --append-output | Append to an existing output file instead of overwriting |
-v / -vv | nmap -v | Increase verbosity |
-d / -dd | nmap -d | Increase debugging info |
--reason | nmap --reason | Show reason a port state was classified |
--open | nmap --open | Show only open or possibly-open ports |
--packet-trace | nmap --packet-trace | Show raw packet send/receive detail |
--iflist | nmap --iflist | List interfaces and routes |
--resume | nmap --resume results.file | Resume an interrupted scan (requires prior save) |
Helpful Output Examples & Pipelines
Combine Nmap with standard UNIX tools to extract actionable info.
# Find web servers (HTTP):
nmap -p80 -sV -oG - --open 192.168.1.0/24 | grep open
# Generate list of live hosts from random scan (XML -> grep -> cut):
nmap -iR 10 -n -oX out.xml | grep "Nmap" | cut -d " " -f5 > live-hosts.txt
# Append hosts from second scan:
nmap -iR 10 -n -oX out2.xml | grep "Nmap" | cut -d " " -f5 >> live-hosts.txt
# Compare two scans:
ndiff scan1.xml scan2.xml
# Convert XML to HTML:
xsltproc nmap.xml -o nmap.html
# Frequency of open ports (clean and aggregate):
grep " open " results.nmap | sed -r 's/ +/ /g' | sort | uniq -c | sort -rn | less
Miscellaneous Flags
| Switch | Example | Description |
|---|---|---|
-6 | nmap -6 2607:f0d0:1002:51::4 | Enable IPv6 scanning |
-h | nmap -h | Show help screen |
Other Useful Commands (Mixed Examples)
# Discovery only on specific TCP ports, no port scan:
nmap -iR 10 -PS22-25,80,113,1050,35000 -v -sn
# ARP-only discovery on local net, verbose, no port scan:
nmap 192.168.1.0/24 -PR -sn -vv
# Traceroute to random targets (no ports):
nmap -iR 10 -sn --traceroute
# List targets only but use internal DNS server:
nmap 192.168.1.1-50 -sL --dns-server 192.168.1.1
# Show packet details during scan:
nmap 192.168.1.1 --packet-trace
Practical Tips & Etiquette
- Always have written permission to scan networks you do not own.
- Start small: discovery -> targeted port scan -> version detection -> scripts (sketched below).
- Use --script carefully; some scripts are intrusive.
- Keep a log of what you scanned and when (timestamps help with audits).
- For large networks, break scans into chunks and use --min-rate / --max-rate to control load.
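A minimal sketch of that staged workflow (addresses and ports are placeholders; adjust each step to what the previous one actually found):
# 1. Discovery: which hosts are up?
nmap -sn 192.168.1.0/24
# 2. Targeted port scan of a live host:
nmap -p 22,80,443 192.168.1.10
# 3. Version detection on the open ports:
nmap -sV -p 80,443 192.168.1.10
# 4. Scripts last, limited to safe ones:
nmap -sV --script "default and safe" -p 80,443 192.168.1.10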
Appendix – Quick Command Generator (Examples)
- nmap -sS -p 1-100 -T4 -oA quick-scan 192.168.1.0/24 – fast SYN scan of ports 1-100, saving all output formats.
- nmap -Pn -sV --script=vuln -oX vuln-check.xml 10.0.0.5 – skip host discovery, run version detection and vulnerability scripts.
SSH Cheat Sheet
Whether you need a quick recap of SSH commands or you're learning SSH from scratch, this guide will help. SSH is a must-have tool for network administrators and anyone who needs to log in to remote systems securely.
๐ What Is SSH?
SSH (Secure Shell / Secure Socket Shell) is a network protocol that allows secure access to network services over unsecured networks.
Key tools included in the suite:
- ssh-keygen – Create SSH authentication key pairs.
- scp (Secure Copy Protocol) – Copy files securely between hosts.
- sftp (Secure File Transfer Protocol) – Securely send/receive files.
By default, an SSH server listens on TCP port 22.
๐ Basic SSH Commands
| Command | Description |
|---|---|
ssh user@host | Connect to remote server |
ssh pi@raspberry | Connect as pi on default port 22 |
ssh pi@raspberry -p 3344 | Connect on custom port 3344 |
ssh -i /path/file.pem admin@192.168.1.1 | Connect using private key file |
ssh root@192.168.2.2 'ls -l' | Execute remote command |
ssh user@192.168.3.3 bash < script.sh | Run script remotely |
ssh friend@Best.local "tar cvzf - ~/ffmpeg" > output.tgz | Download compressed directory |
๐ Key Management
| Command | Description |
|---|---|
ssh-keygen | Generate SSH keys |
ssh-keygen -F [host] | Find entry in known_hosts |
ssh-keygen -R [host] | Remove entry from known_hosts |
ssh-keygen -y -f private.key > public.pub | Generate public key from private |
ssh-keygen -t rsa -b 4096 -C "email@example.com" | Generate new RSA 4096-bit key |
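Where the server's OpenSSH is reasonably current, an Ed25519 key is a sensible default; it is shorter and faster than RSA (treat this as a suggestion, not a requirement):
# Generate an Ed25519 key pair with an identifying comment:
ssh-keygen -t ed25519 -C "email@example.com"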
๐ File Transfers
SCP (Secure Copy)
| Command | Description |
|---|---|
scp user@server:/file dest/ | Copy remote → local |
scp file user@server:/path | Copy local → remote |
scp user1@server1:/file user2@server2:/path | Copy between two servers |
scp -r user@server:/folder dest/ | Copy directory recursively |
scp -P 8080 file user@server:/path | Connect on port 8080 |
scp -C | Enable compression |
scp -v | Verbose output |
SFTP (Secure File Transfer)
| Command | Description |
|---|---|
sftp user@server | Connect to server via SFTP |
sftp -P 8080 user@server | Connect on port 8080 |
sftp -r dir user@server:/path | Recursively transfer directory |
โ๏ธ SSH Configurations & Options
| Command | Description |
|---|---|
man ssh_config | SSH client configuration manual |
cat /etc/ssh/ssh_config | View system-wide SSH client config |
cat /etc/ssh/sshd_config | View system-wide SSH server config |
cat ~/.ssh/config | View user-specific config |
cat ~/.ssh/known_hosts | View recorded host keys of servers you have connected to |
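A minimal per-host entry in ~/.ssh/config saves retyping options; the alias, address, user, port, and key path below are placeholders:
# ~/.ssh/config
Host myserver
    HostName 192.168.1.1
    User admin
    Port 3344
    IdentityFile ~/.ssh/id_ed25519
After that, a plain ssh myserver picks up all of these settings automatically.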
SSH Agent & Keys
| Command | Description |
|---|---|
ssh-agent | Start agent to hold private keys |
ssh-add ~/.ssh/id_rsa | Add key to agent |
ssh-add -l | List cached keys |
ssh-add -D | Delete all cached keys |
ssh-copy-id user@server | Copy keys to remote server |
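A typical agent session looks like this; the eval wrapper exports the agent's socket and PID into the current shell so later commands can find it:
# Start the agent, load a key, and install the public key on a server:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh-copy-id user@server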
๐ฅ๏ธ Remote Server Management
After logging into a remote server:
- cd – Change directory
- ls – List files
- mkdir – Create directory
- mv – Move/rename files
- nano / vim – Edit files
- ps – List processes
- kill – Stop process
- top – Monitor resources
- exit – Close SSH session
๐ Advanced SSH Commands
X11 Forwarding (GUI Apps over SSH)
- Client ~/.ssh/config:
  Host *
      ForwardAgent yes
      ForwardX11 yes
- Server /etc/ssh/sshd_config:
  X11Forwarding yes
  X11DisplayOffset 10
  X11UseLocalhost no
| Command | Description |
|---|---|
sshfs user@server:/path /local/mount | Mount remote filesystem locally |
ssh -C user@host | Enable compression |
ssh -X user@server | Enable X11 forwarding |
ssh -Y user@server | Enable trusted X11 forwarding |
๐ SSH Tunneling
Local Port Forwarding -L
ssh -L local_port:destination:remote_port user@server
Example: ssh -L 2222:10.0.1.5:3333 root@192.168.0.1
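A common use is reaching a service that only listens on the remote machine's loopback interface; for example, assuming PostgreSQL on the server's localhost:5432:
# Expose the remote Postgres on local port 5433:
ssh -L 5433:localhost:5432 user@server
# Then connect locally:
psql -h localhost -p 5433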
Remote Port Forwarding -R
ssh -R remote_port:destination:destination_port user@server
Example: ssh -R 8080:192.168.3.8:3030 -N -f user@remote.host
Dynamic Port Forwarding -D (SOCKS Proxy)
ssh -D 6677 -q -C -N -f user@host
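Once the SOCKS proxy is listening on local port 6677, point clients at it; curl, for instance, can route a request through it (the URL is a placeholder):
# Send an HTTP request through the SSH SOCKS proxy:
curl --socks5-hostname localhost:6677 http://example.com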
ProxyJump -J (Bastion Host)
ssh -J user@proxy_host user@target_host
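The same hop can be made permanent in ~/.ssh/config (host names are placeholders), after which a plain ssh target_host routes via the bastion:
# ~/.ssh/config
Host target_host
    ProxyJump user@proxy_host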
๐ก๏ธ Security Best Practices
- Disable unused features: AllowTcpForwarding no, X11Forwarding no.
- Change the default port from 22 to something else.
- Use SSH certificates with ssh-keygen.
- Restrict logins with AllowUsers in sshd_config.
- Use bastion hosts for added security.
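A minimal sketch of those settings in /etc/ssh/sshd_config; the port and user names are examples, and sshd must be reloaded for changes to take effect (on systemd distros: sudo systemctl reload sshd):
# /etc/ssh/sshd_config
Port 2222
AllowUsers alice bob
AllowTcpForwarding no
X11Forwarding no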
โ Conclusion
This cheat sheet covered:
- Basic SSH connections
- File transfers (SCP/SFTP)
- Key management & configs
- Remote management commands
- Advanced tunneling & forwarding
SSH remains an indispensable tool for IT professionals and security practitioners.
Wireshark Cheat Sheet
Wireshark is one of the most popular and powerful tools for capturing, analyzing, and troubleshooting network traffic.
Whether you are a network administrator, security professional, or just someone curious about how networks work, learning Wireshark is a valuable skill. This cheat sheet serves as a quick reference for filters, commands, shortcuts, and syntax.
๐ Default Columns in Packet Capture
| Name | Description |
|---|---|
| No. | Frame number from the beginning of the packet capture |
| Time | Seconds from the first frame |
| Source (src) | Source address (IPv4, IPv6, or Ethernet) |
| Destination (dst) | Destination address |
| Protocol | Protocol in Ethernet/IP/TCP segment |
| Length | Frame length in bytes |
๐ Logical Operators
| Operator | Name | Description |
|---|---|---|
and / && | Logical AND | All conditions must match |
or / || | Logical OR | At least one condition matches |
xor / ^^ | Logical XOR | Only one of two conditions matches |
not / ! | Negation | Exclude packets |
[n] [ ... ] | Substring operator | Match specific text |
๐ฏ Filtering Packets (Display Filters)
| Operator | Description | Example |
|---|---|---|
eq / == | Equal | ip.dst == 192.168.1.1 |
ne / != | Not equal | ip.dst != 192.168.1.1 |
gt / > | Greater than | frame.len > 10 |
lt / < | Less than | frame.len < 10 |
ge / >= | Greater or equal | frame.len >= 10 |
le / <= | Less or equal | frame.len <= 10 |
๐งฉ Filter Types
| Name | Description |
|---|---|
| Capture filter | Applied during capture |
| Display filter | Applied to hide/show after capture |
๐ก Capturing Modes
| Mode | Description |
|---|---|
| Promiscuous mode | Capture all packets on the segment |
| Monitor mode | Capture all wireless traffic (Linux/Unix only) |
โก Miscellaneous
- Slice Operator – [ ... ] (range)
- Membership Operator – {} (in)
- Ctrl+E – Start/Stop capturing
๐ Capture Filter Syntax
Example:
src host 192.168.1.1 and dst host 202.164.30.1 and tcp
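Capture filters use BPF syntax, so host, port, and negation primitives combine freely; two more common patterns:
# Only DNS traffic:
port 53
# Everything except one noisy host:
not host 192.168.1.5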
๐จ Display Filter Syntax
Example:
http and ip.dst == 192.168.1.1 and tcp.port
โจ๏ธ Keyboard Shortcuts (Main Window)
| Shortcut | Action |
|---|---|
Tab / Shift+Tab | Move between UI elements |
↑ / ↓ | Move between packets/details |
Ctrl+↓ / F8 | Next packet (even if unfocused) |
Ctrl+↑ / F7 | Previous packet |
Ctrl+. | Next packet in conversation |
Ctrl+, | Previous packet in conversation |
Return / Enter | Toggle tree item |
Backspace | Jump to parent node |
๐ Protocol Values
ether, fddi, ip, arp, rarp, decnet, lat, sca, moprc, mopdl, tcp, udp
๐ Common Filtering Commands
| Usage | Syntax |
|---|---|
| Filter by IP | ip.addr == 10.10.50.1 |
| Destination IP | ip.dst == 10.10.50.1 |
| Source IP | ip.src == 10.10.50.1 |
| IP range | ip.addr >= 10.10.50.1 and ip.addr <= 10.10.50.100 |
| Multiple IPs | ip.addr == 10.10.50.1 and ip.addr == 10.10.50.100 |
| Exclude IP | !(ip.addr == 10.10.50.1) |
| Subnet | ip.addr == 10.10.50.1/24 |
| Port | tcp.port == 25 |
| Destination port | tcp.dstport == 23 |
| IP + Port | ip.addr == 10.10.50.1 and tcp.port == 25 |
| URL | http.host == "hostname" |
| Time | frame.time >= "June 02, 2019 18:04:00" |
| SYN flag | tcp.flags.syn == 1 and tcp.flags.ack == 0 |
| Beacon frames | wlan.fc.type_subtype == 0x08 |
| Broadcast | eth.dst == ff:ff:ff:ff:ff:ff |
| Multicast | (eth.dst[0] & 1) |
| Hostname | ip.host == hostname |
| MAC address | eth.addr == 00:70:f4:23:18:c4 |
| RST flag | tcp.flags.reset == 1 |
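These fields compose with the logical operators above; for example, to watch only handshake attempts from a single host:
ip.src == 10.10.50.1 and tcp.flags.syn == 1 and tcp.flags.ack == 0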
๐ ๏ธ Main Toolbar Items
| Item | Menu | Description |
|---|---|---|
| Start | Capture → Start | Begin capture |
| Stop | Capture → Stop | Stop capture |
| Restart | Capture → Restart | Restart session |
| Options | Capture → Options… | Capture options dialog |
| Open | File → Open… | Load capture file |
| Save As | File → Save As… | Save capture file |
| Close | File → Close | Close current capture |
| Reload | View → Reload | Reload capture file |
| Find Packet | Edit → Find Packet… | Search packets |
| Go Back | Go → Back | Jump back in history |
| Go Forward | Go → Forward | Jump forward |
| Go to Packet | Go → Packet | Jump to specific packet |
| First Packet | Go → First Packet | Jump to first packet |
| Last Packet | Go → Last Packet | Jump to last packet |
| Auto Scroll | View → Auto Scroll | Scroll live capture |
| Colorize | View → Colorize | Colorize packet list |
| Zoom In/Out | View → Zoom In/Out | Adjust zoom level |
| Normal Size | View → Normal Size | Reset zoom |
| Resize Columns | View → Resize Columns | Fit column width |
โ Conclusion
Wireshark is an incredibly powerful tool for analyzing and troubleshooting network traffic. This cheat sheet gives you commands, filters, and shortcuts to navigate Wireshark efficiently and quickly.
Google Search – One-Page Cheat Sheet
A compact, copy-pasteable cheat sheet with short explanations and ready-to-use examples.
Core operators (fast, precise)
- related: – Find sites similar to a domain. Example: related:clientwebsite.com
- site: – Search only inside a specific website. Example: burnout at work site:hbr.org
- intitle:infographic – Pages that call out "infographic" in the title. Example: gdpr intitle:infographic
- filetype: – Restrict results to a file format (pdf, docx, ppt). Example: consulting case interview filetype:pdf
- intitle:2022 – Find pages with a specific year in the title (good for reviews). Example: intitle:2022 laptop for students
- - (minus) – Exclude words to reduce noise. Example: meta -facebook
- -site: – Exclude an entire domain. Example: data visualization -site:youtube.com -site:pinterest.com
- "exact phrase" – Exact-match a full phrase. Example: "that's where google must be down"
- * (wildcard) – Placeholder for unknown words. Example: "top * programming languages 2024"
- + – Force inclusion / niche focus. Example: app annie +shopping
- OR – Return results that match either term. Example: growth marketing OR content marketing OR product marketing
Region & time filters
- Country TLD with site: – Limit to country-level domains. Example: vaccine site:.us or vaccine site:.fr
- Date tools (Google → Tools → Any time) – Filter by recency (e.g., Past year). Example workflow: search google tasks tips → Tools → select Past year
Image quick tip
- Transparent backgrounds – Images → Tools → Color → Transparent. Example: company logo → Tools → Color → Transparent
Quick reference (operators at a glance)
- Exact search: "search"
- Site search: site:
- Exclude: -search
- After date: after:YYYY-MM-DD (useful for single-date filtering)
- Range: YYYY-MM-DD..YYYY-MM-DD (or first..second for numbers)
- Compare / either-or: (A|B) C or A OR B C
- Wildcard: *search (use * inside phrases)
- File type: filetype:pdf
Combine operators – practical combos
- Find recent PDFs from universities: site:edu filetype:pdf intitle:2023
- Search product reviews excluding YouTube: "laptop review" intitle:2024 -site:youtube.com
- Regional news about vaccines: vaccine site:.de after:2024-01-01
- Narrow Q&A on a topic: "how to build REST API" site:stackoverflow.com
Copy-paste cheat block
related:clientwebsite.com
burnout at work site:hbr.org
gdpr intitle:infographic
consulting case interview filetype:pdf
intitle:2022 laptop for students
meta -facebook
data visualization -site:youtube.com -site:pinterest.com
"that's where google must be down"
"top * programming languages 2024"
app annie +shopping
growth marketing OR content marketing OR product marketing
vaccine site:.us