Discover a step-by-step guide to installing the Opera browser on Ubuntu and other popular Linux distributions.
The system’s man pages contain a wealth of useful information. But sometimes it’s hard to see the wood for the trees. Step forward alternatives to man.
Wiki.js is a modern open-source wiki software. This guide explains how to install Wiki.js using Podman and Docker Compose on Debian Linux 12.
If you’re a Linux enthusiast or a seasoned sysadmin, you already know the importance of online privacy and security. A reliable VPN is a must-have tool in your arsenal, and ProtonVPN is one of the best options out there, due to its strong encryption, no-logs policy, and open-source transparency. In this guide, I’ll show you […]
In this tutorial, we are going to explain what the ERR_CONNECTION_RESET error is and how to fix it. This issue occurs when the connection between the browser and the web server closes unexpectedly: the server terminates the session before completing the data transfer. In other words, the browser sent […]
FastAPI is a Python-based web framework for creating API services. It is modern, fast, and high-performance, with support for asynchronous operations.
Memory management is a critical aspect of modern operating systems, ensuring efficient allocation and deallocation of system memory. Linux, as a robust and widely used operating system, employs sophisticated techniques for managing memory efficiently. Understanding key concepts such as page tables, swapping, and memory allocation is crucial for system administrators, developers, and anyone working with Linux at a low level.
This article provides a look into Linux memory management, exploring the intricacies of page tables, the role of swapping, and different memory allocation mechanisms. By the end, readers will gain a deep understanding of how Linux handles memory and how to optimize it for better performance.
Linux, like most modern operating systems, implements virtual memory to provide processes with an illusion of a vast contiguous memory space. Virtual memory enables efficient multitasking, isolation between processes, and access to more memory than is physically available. The core mechanism facilitating virtual memory is the page table, which maps virtual addresses to physical memory locations.
How Page Tables Work
A page table is a data structure used by the Linux kernel to translate virtual addresses into physical addresses. Since memory is managed in fixed-size blocks called pages (typically 4KB in size), each process maintains a page table that keeps track of which virtual pages correspond to which physical pages.
Due to large address spaces in modern computing (e.g., 64-bit architectures), a single-level page table would be inefficient and consume too much memory. Instead, Linux uses a hierarchical multi-level page table approach:
Single-Level Page Table (Used in older 32-bit systems with small memory)
Two-Level Page Table (Improves efficiency by breaking down page tables into smaller chunks)
Three-Level Page Table (Used in some architectures for better scalability)
Four-Level Page Table (Standard in modern 64-bit Linux systems, breaking addresses into even smaller sections)
Each level helps locate the next portion of the page table until the final entry, which contains the actual physical address.
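As a concrete sketch of this decomposition, consider x86-64 with four-level paging, where a 48-bit virtual address is split into four 9-bit table indexes plus a 12-bit page offset. The indexes can be computed with plain shell arithmetic; the address below is invented for illustration:
# Hypothetical 48-bit virtual address; with 4KB pages the low 12 bits are the offset.
addr=$((0x7f3a12345678))
printf 'PGD index: %d\n' $(( (addr >> 39) & 0x1ff ))   # page global directory (level 4)
printf 'PUD index: %d\n' $(( (addr >> 30) & 0x1ff ))   # page upper directory (level 3)
printf 'PMD index: %d\n' $(( (addr >> 21) & 0x1ff ))   # page middle directory (level 2)
printf 'PTE index: %d\n' $(( (addr >> 12) & 0x1ff ))   # page table entry (level 1)
printf 'offset:    0x%x\n' $(( addr & 0xfff ))         # byte offset within the 4KB page
Each printed index selects one entry at its level of the hierarchy; the final PTE supplies the physical frame that, combined with the offset, yields the physical address.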
Page Table Entries (PTEs) and Their Components
A Page Table Entry (PTE) contains essential information, such as:
The physical page frame number.
A present/valid bit indicating whether the page is currently in physical memory.
Permission bits controlling read, write, and execute access, plus a user/supervisor flag.
Accessed and dirty bits, which the kernel uses for page replacement and write-back decisions.
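Several of these mechanisms can be observed on a live system directly from the shell; the commands below are read-only and safe to run anywhere:
getconf PAGESIZE              # page size in bytes, typically 4096
grep VmPTE /proc/self/status  # memory consumed by this process's page tables
swapon --show                 # active swap areas, if any
free -h                       # overall RAM and swap usage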
High dynamic range imaging (HDR) is an important technology for photographers.
Killport is a Linux command-line tool that allows users to quickly kill processes listening on one or more ports.
Kdenlive 24.12.2 open-source video editing software brings bug fixes, including UI resizing, proxy clip handling, and improved Speech-to-Text support.
After Asahi Linux’s founder and lead developer abruptly quit on Thursday, the project moves forward with seven developers sharing the lead role.
Software package management is an essential skill for any system administrator working with Linux distributions such as CentOS and RHEL (Red Hat Enterprise Linux). Managing software efficiently ensures that your system remains secure, up-to-date, and optimized for performance.
CentOS and RHEL utilize two primary package managers: Yum (Yellowdog Updater, Modified) and DNF (Dandified Yum). While Yum has been the default package manager in older versions (CentOS/RHEL 7 and earlier), DNF replaces Yum starting from CentOS 8 and RHEL 8, offering improved performance, dependency resolution, and better memory management.
In this guide, we will explore every aspect of software package management using Yum and DNF, from installing, updating, and removing packages to managing repositories and handling dependencies.
Yum (Yellowdog Updater, Modified) is a package management tool that helps users install, update, and remove software packages on CentOS and RHEL systems. It manages software dependencies automatically, ensuring that required libraries and dependencies are installed along with the package.
What is DNF?
DNF (Dandified Yum) is the next-generation package manager introduced in CentOS 8 and RHEL 8. It provides faster package management, better memory efficiency, and improved dependency resolution compared to Yum. Although Yum is still available in newer versions, it acts as a symbolic link to DNF.
Key advantages of DNF over Yum:
Improved performance and speed
Reduced memory usage
Better dependency management
Enhanced security and modularity
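You can see the symlink relationship directly on a stock CentOS/RHEL 8 installation, where the yum binary is typically just a thin wrapper around DNF:
ls -l /usr/bin/yum
The output usually shows something like /usr/bin/yum -> dnf-3, confirming that yum invocations are handled by DNF under the hood.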
Before installing or updating software, it is good practice to ensure that the system package repositories are up to date.
Using Yum (CentOS/RHEL 7 and Earlier)
yum check-update
yum update
Using DNF (CentOS/RHEL 8 and Later)
dnf check-update
dnf update
The update command refreshes package lists and ensures that installed software is up to date.
Software packages can be installed from official or third-party repositories.
Using Yum
yum install package-name
Using DNF
dnf install package-name
Example:
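For instance, to install the Apache HTTP server, which is packaged as httpd on CentOS/RHEL:
yum install httpd   # CentOS/RHEL 7 and earlier
dnf install httpd   # CentOS/RHEL 8 and later
Either command resolves and installs the package together with any libraries it depends on.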
In the world of system administration, effective log management is crucial for troubleshooting, security monitoring, and ensuring system stability. Logs provide valuable insights into system activities, errors, and security incidents. Ubuntu, like most Linux distributions, relies on a logging mechanism to track system and application events.
One of the most powerful logging systems available on Ubuntu is Rsyslog. It extends the traditional syslog functionality with advanced features such as filtering, forwarding logs over networks, and log rotation. This article provides a guide to managing system logs with Rsyslog on Ubuntu, covering installation, configuration, remote logging, troubleshooting, and advanced features.
Rsyslog (Rocket-fast System for Log Processing) is an enhanced syslog daemon that allows for high-performance log processing, filtering, and forwarding. It is designed to handle massive volumes of logs efficiently and provides robust features such as:
Multi-threaded log processing
Log filtering based on various criteria
Support for different log formats (e.g., JSON, CSV)
Secure log transmission via TCP, UDP, and TLS
Log forwarding to remote servers
Writing logs to databases
Rsyslog is the default logging system in Ubuntu 20.04 LTS and later and is commonly used in enterprise environments.
Before installing Rsyslog, check if it is already installed and running with the following command:
systemctl status rsyslog
If the output shows active (running), then Rsyslog is installed. If not, you can install it using:
sudo apt update
sudo apt install rsyslog -y
Once installed, enable and start the Rsyslog service:
sudo systemctl enable rsyslog
sudo systemctl start rsyslog
To verify Rsyslog’s status, run:
systemctl status rsyslog
Rsyslog’s primary configuration files are:
/etc/rsyslog.conf – The main configuration file
/etc/rsyslog.d/ – Directory for additional configuration files
Rsyslog uses a facility, severity, action model:
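Each rule pairs a facility.severity selector with an action, such as a local file or a remote destination. A minimal sketch, with an illustrative file path and hostname:
# write mail-facility messages of severity err and above to a file
mail.err    /var/log/mail-errors.log
# forward messages of severity info and above from all facilities via UDP (port 514)
*.info      @logserver.example.com:514
A single @ forwards over UDP, while @@ would forward over TCP.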
In the world of Linux networking, protocols play a crucial role in enabling seamless communication between devices. Whether you're browsing the internet, streaming videos, or troubleshooting network issues, underlying networking protocols such as TCP/IP, UDP, and ICMP are responsible for the smooth transmission of data packets. Understanding these protocols is essential for system administrators, network engineers, and even software developers working with networked applications.
This article provides an exploration of the key Linux networking protocols: TCP (Transmission Control Protocol), UDP (User Datagram Protocol), and ICMP (Internet Control Message Protocol). We will examine their working principles, advantages, differences, and practical use cases in Linux environments.
The TCP/IP model (Transmission Control Protocol/Internet Protocol) serves as the backbone of modern networking, defining how data is transmitted across interconnected networks. It consists of four layers:
Application Layer: Handles high-level protocols like HTTP, FTP, SSH, and DNS.
Transport Layer: Ensures reliable or fast data delivery via TCP or UDP.
Internet Layer: Manages addressing and routing with IP and ICMP.
Network Access Layer: Deals with physical transmission methods such as Ethernet and Wi-Fi.
The TCP/IP model is simpler than the traditional OSI model but still retains the fundamental networking concepts necessary for communication.
TCP is a connection-oriented protocol that ensures data is delivered accurately and in order. It is widely used in scenarios where reliability is crucial, such as web browsing, email, and file transfers.
Key Features of TCP:
Reliable Transmission: Uses acknowledgments (ACKs) and retransmissions to ensure data integrity.
Connection-Oriented: Establishes a dedicated connection before data transmission.
Ordered Delivery: Maintains the correct sequence of data packets.
Error Checking: Uses checksums to detect transmission errors.
Connection Establishment – The Three-Way Handshake:
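The client sends a segment with the SYN flag set, the server answers with SYN-ACK, and the client completes the handshake with a final ACK. On Linux you can watch this exchange with tcpdump; the interface and port below are illustrative choices:
sudo tcpdump -i eth0 'tcp port 80 and (tcp[tcpflags] & (tcp-syn|tcp-ack) != 0)'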
In the realm of Linux, efficiency and productivity are not just goals but necessities. Among the most powerful tools in a power user's arsenal are terminal multiplexers, specifically tmux and Screen. These tools enhance the command-line experience by allowing users to run multiple terminal sessions within a single window, detach from them while they keep running in the background, and reattach to them at will. This guide delves into the world of tmux and Screen, showing you how to harness their capabilities to streamline your workflow and boost your productivity.
A terminal multiplexer is a software application that allows multiple terminal sessions to be accessed and controlled from a single screen. Users can switch between these sessions seamlessly, without the need to open multiple terminal windows. This capability is particularly useful in remote session management, where sessions need to remain active even when the user is disconnected.
Key Features and Benefits
Screen, developed by GNU, has been a staple among system administrators and power users for decades. It provides the basic functionality needed to manage multiple windows in a single session.
Installing Screen
To install Screen on Ubuntu or Debian:
sudo apt-get install screen
On Red Hat or CentOS:
sudo yum install screen
On Fedora:
sudo dnf install screen
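A minimal Screen workflow looks like this, with an illustrative session name:
screen -S build   # start a named session
# ... run a long task, then press Ctrl-a d to detach ...
screen -ls        # list detached and attached sessions
screen -r build   # reattach later, even from a new SSH login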
Linux, a powerhouse in the world of operating systems, is renowned for its robustness, security, and scalability. Central to these strengths is the effective management of users and groups, which ensures secure and efficient access to system resources. This guide delves into the intricacies of user and group management, providing a foundation for both newcomers and seasoned administrators to enhance their Linux system administration skills.
In Linux, a user is anyone who interacts with the operating system, be it a human or a software agent. Users can be categorized into three types:
Root User: Also known as the superuser, the root user has unfettered access to the system. This account can modify any file, run privileged commands, and has administrative rights over other user accounts.
System Users: These accounts are created to run specific services such as web servers or database systems. Typically, these users do not have login capabilities and are used to segregate duties for security purposes.
Regular Users: These are the typical accounts created for actual people using the system. They have more limited privileges compared to the root user, which can be adjusted through group memberships or permission changes.
Each user is uniquely identified by a User ID (UID). The UID for the root user is always 0, while UIDs for regular users typically start at 1000 by default.
A group in Linux is a collection of users who share certain privileges and access rights. Groups make it easier to manage permissions for a collection of users, rather than having to assign permissions individually.
Groups are identified by a Group ID (GID), similar to how users are identified by UIDs.
Linux offers a suite of command-line tools for managing users and groups:
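Among the most commonly used are useradd, usermod, and userdel, together with their group counterparts. A quick sketch, with illustrative user and group names:
sudo useradd -m alice               # create a user with a home directory
sudo passwd alice                   # set the user's password
sudo groupadd developers            # create a group
sudo usermod -aG developers alice   # add the user to a supplementary group
id alice                            # verify UID, GID, and group memberships
sudo userdel -r alice               # delete the user and remove the home directory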
Debian-based Linux distributions, such as Ubuntu, Linux Mint, and Debian itself, rely on robust package management systems to install, update, and remove software efficiently. One of the most critical aspects of package management is handling dependencies—ensuring that all required libraries and packages are present for an application to function correctly.
Dependency management is crucial for maintaining system stability, avoiding broken packages, and ensuring software compatibility. This article explores how Debian handles package dependencies, how to manage them effectively, and how to troubleshoot common dependency-related issues.
Debian uses the .deb package format, which contains precompiled binaries, configuration files, and metadata describing the package, including its dependencies. The primary tools for handling Debian packages are:
dpkg: A low-level package manager used for installing, removing, and querying .deb packages.
APT (Advanced Package Tool): A high-level package management system that resolves dependencies automatically and fetches required packages from repositories.
Without proper dependency handling, installing a single package could become a nightmare of manually finding and installing supporting files. APT streamlines this process by automating dependency resolution.
Dependencies ensure that an application has all the necessary libraries and components to function correctly. In Debian, dependencies are defined in the package’s control file. These dependencies are categorized as follows:
Depends: Mandatory dependencies required for the package to work.
Recommends: Strongly suggested dependencies that enhance functionality but are not mandatory.
Suggests: Optional packages that provide additional features.
Breaks: Indicates that a package is incompatible with certain versions of another package.
Conflicts: Prevents the installation of two incompatible packages.
Provides: Allows one package to act as a substitute for another (useful for virtual packages).
For example, if you attempt to install a software package using APT, it will automatically fetch and install all required dependencies based on the Depends field.
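As a sketch, a hypothetical package’s control file might declare these relationships as follows; every package name and version here is invented for illustration:
Package: myapp
Version: 1.0-1
Depends: libc6 (>= 2.34), python3
Recommends: curl
Suggests: myapp-doc
Conflicts: myapp-legacy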
APT simplifies dependency management by automatically resolving and installing required packages. Some essential APT commands include:
Updating package lists: sudo apt update
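Other day-to-day commands that exercise APT’s dependency resolver include:
sudo apt install package-name    # install a package plus everything in its Depends field
sudo apt upgrade                 # upgrade all installed packages
sudo apt remove package-name     # remove a package but keep its configuration files
sudo apt autoremove              # remove dependencies that are no longer needed
apt-cache depends package-name   # list a package's direct dependencies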
Linux, renowned for its robustness and security, is a powerful multi-user operating system that allows multiple people to interact with the same system resources without interfering with each other. Proper management of user accounts and permissions is crucial to maintaining the security and efficiency of a Linux system. This article provides an exploration of how to effectively manage user accounts and permissions in Linux.
User accounts are essential for individual users to access and operate Linux systems. They help in resource allocation, setting privileges, and securing the system from unauthorized access. There are mainly two types of user accounts:
Additionally, Linux systems also include various system accounts that are used to run services such as web servers, databases, and more.
Creating a user account in Linux can be accomplished with the useradd or adduser commands. The adduser command is more interactive and user-friendly than useradd.
sudo adduser newusername
This command creates a new user account and its home directory with default configuration files.
Setting user attributes
Password: set or update it with the passwd command.
Home directory: specify a custom location at creation time with useradd -d /home/newusername newusername.
Login shell: choose it at creation time with useradd -s /bin/bash newusername.
Modifying an account: use usermod. For example, sudo usermod -s /bin/zsh username changes the user's default shell to zsh.
Deleting an account: use userdel -r username; the -r flag also removes the user's home directory.
In Linux, every file and directory has associated access permissions which determine who can read, write, or execute them.
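For example, using the standard tools (file, user, and group names are illustrative):
ls -l /etc/passwd                  # -rw-r--r--: owner read/write; group and others read-only
chmod 640 notes.txt                # owner read/write, group read, others no access
chown alice:developers notes.txt   # change a file's owner and group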
In the world of modern software development and IT infrastructure, containerization has emerged as a transformative technology. It offers a way to package software into isolated environments, making it easier to deploy, scale, and manage applications. While Docker is the most popular containerization technology, other solutions cater to different use cases and needs. One such solution is LXC (Linux Containers), which offers a more complete approach to containerization, akin to lightweight virtual machines.
In this guide, we will explore how LXC works, how to set it up on Ubuntu Server, and how to leverage it for efficient and scalable containerization. Whether you're looking to run multiple isolated environments on a single server, or you want a lightweight alternative to virtualization, LXC can meet your needs. By the end of this article, you will have the knowledge to deploy, manage, and secure LXC containers on your Ubuntu Server setup.
LXC (Linux Containers) is an operating system-level virtualization technology that allows you to run multiple isolated Linux systems (containers) on a single host. Unlike traditional virtualization, which relies on hypervisors to emulate physical hardware for each virtual machine (VM), LXC containers share the host’s kernel while maintaining process and file system isolation. This makes LXC containers lightweight and efficient, with less overhead compared to VMs.
LXC offers a more traditional way of containerizing entire operating systems, as opposed to application-focused containerization solutions like Docker. While Docker focuses on packaging individual applications and their dependencies into containers, LXC provides a more complete environment that behaves like a full operating system.
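A minimal sketch of getting a first container running on Ubuntu Server; the container name, distribution, release, and architecture below are illustrative choices:
sudo apt install lxc
sudo lxc-create -n web01 -t download -- -d ubuntu -r jammy -a amd64
sudo lxc-start -n web01
sudo lxc-attach -n web01   # open a shell inside the container
sudo lxc-ls -f             # list containers and their state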