News & Information       http://info.owt.com

Linux

11/24/2020   Linux Journal
Terminal Vitality - Difference Engine

Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire. 

Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.

GUI Linux

But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytical Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.

 

[Image: A tiling window manager makes xterm overload more manageable]

 

A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building a lot of software using dozens of computers across a network, and discovered the charms and challenges of managing them all through GNU's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.
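
For readers who haven't used it, the detach-and-reattach workflow screen enables looks roughly like this (a quick sketch; the session name is made up):

# start a named screen session on a remote build machine
screen -S buildfarm

# open as many shell windows inside it as you need with Ctrl-a c,
# then detach and leave everything running with Ctrl-a d

# later, perhaps from home over ssh, list sessions and reattach
screen -ls
screen -r buildfarm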

Bashing GUIs

11/24/2020   LinuxSecurity.com
An update that solves two vulnerabilities and has one errata is now available.
11/24/2020   LinuxSecurity.com
An update that solves three vulnerabilities and has 17 fixes is now available.
11/24/2020   LinuxSecurity.com
An update that solves three vulnerabilities and has 17 fixes is now available.
11/24/2020   LinuxSecurity.com
An update that fixes two vulnerabilities is now available.
11/24/2020   LinuxSecurity.com
The following vulnerabilities have been discovered in the webkit2gtk web engine: CVE-2020-9948
11/24/2020   LinuxSecurity.com
An update that solves 21 vulnerabilities and has 21 fixes is now available.
11/19/2020   Linux Journal
raspberry-pi-zero-w

I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For the uninitiated, it's a tiny ARM computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that can be overclocked, 512 MB of RAM, and 802.11g wireless and Bluetooth.

[Image: Raspberry Pi Zero W with 802.11g wireless and Bluetooth]

A few weeks ago I built a garage door opener with video, accessible via the net. I wanted to do something a bit different and settled on a dashcam for my brother-in-law's SUV.

I wanted the camera and Pi Zero W mounted on the dashboard and easy to remove. On boot it should autostart the RamDashCam (RDC), and there should also be 4 desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh and shutdownshutdown.sh. Also, create a folder named video on the Desktop for the older video files. I also needed a way to power the RDC when there is no power to the vehicle's USB ports. Lastly, I wanted its data accessible on the local LAN when the vehicle is at home.

Here is the parts list:

  1. Raspberry Pi Zero W kit (I got mine from Vilros.com)
  2. Raspberry Pi official camera
  3. Micro SD card, at least 32 gigs
  4. A 3D-printed case from thingverse.com
  5. Portable charger, usually used to charge cell phones and tablets on the go
  6. Command strips (like double-sided tape that's easy to remove) or Velcro strips

 

First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.
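
On Raspbian the usual way to pin a static address is through dhcpcd. A sketch of the kind of block you would append to /etc/dhcpcd.conf is below; the interface name and addresses are example values, so adjust them for your network:

sudo tee -a /etc/dhcpcd.conf > /dev/null << 'EOF'

# static address for the dashcam (example values)
interface wlan0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF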

Now to the fun stuff. Let's create a service so we can start and stop RDC via systemd. Using your favorite editor, navigate to "/etc/systemd/system/", create "dashcam.service", and add the following:

[Unit]
Description=dashcam service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=on-failure
RestartSec=1
User=pi
WorkingDirectory=/home/pi/Desktop
ExecStart=/bin/bash /home/pi/Desktop/dashcam.sh

[Install]
WantedBy=multi-user.target

 

Now that that's complete, let's enable the service by running the following: sudo systemctl enable dashcam
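
Along with the enable command, it's worth making sure systemd sees the new unit; something like this should do (a sketch):

sudo systemctl daemon-reload           # make systemd pick up the new unit file
sudo systemctl enable dashcam.service  # same as above, with the full unit name
systemctl status dashcam.service       # confirm the unit is loaded and enabled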

I added these scripts on the Desktop to start and stop RDC so my brother-in-law doesn't have to mess around in the menus or on the command line. Remember to "chmod +x" these 4 scripts.

 

startdashcam.sh

#!/bin/bash

# remove files older than 3 days
find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

# start dashcam service
sudo systemctl start dashcam

 

stopdashcam.sh
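
A minimal sketch of what stopdashcam.sh could look like, assuming it simply mirrors the start script and stops the service (the exact contents here are an assumption):

#!/bin/bash

# stop dashcam service
sudo systemctl stop dashcam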

11/10/2020   Linux Journal
SeaGL - Seattle GNU/Linux Conference

This Friday, November 13th, and Saturday, November 14th, from 9am to 4pm PST, the 8th annual SeaGL will be held virtually. This year features four keynotes and a mix of talks on FOSS tech, community, and history. SeaGL is absolutely free to attend and is being run with free software!

Additionally, we are hosting a pre-event career expo on Thursday, November 12th from 1pm to 5pm. Counselors will be available for 30-minute video sessions to provide resume reviews and career guidance.

Mission

The Seattle GNU/Linux conference (SeaGL) is a free, as in freedom and tea, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware, and culture.

SeaGL strives to be welcoming, enjoyable, and informative for professional technologists, newcomers, enthusiasts, and all other users of free software, regardless of their background knowledge, providing a space to bridge these experiences and strengthen the free software movement through mentorship, collaboration, and community.

Dates/Times

  • November 13th and 14th
  • Friday and Saturday
  • Main Event: 9am-4:30pm
  • TeaGL: 1-2:45pm, both days
  • Friday Social: 4:30-6pm
  • Saturday Party: 6-10pm
  • Pre-event Career Expo: 1-5pm, Thursday November 12th
  • All times in Pacific Timezone

Hashtags

- `#SeaGL2020`

- `#TeaGLtoasts`

Reference Links

Best contact: press@seagl.org

11/05/2020   Linux Journal
Hot Swappable Filesystems, as Smooth as Btrfs

Filesystems, like file cabinets or drawers, control how your operating system stores data. They also hold metadata such as filetypes, what is attached to data, and who has access to that data.

Quite honestly, not enough people consider which file system to use for their computers.

Windows and macOS users have little reason to look into filesystems because each ships with one that has been in wide use for ages: for Windows that's NTFS, and for macOS it's HFS+. For Linux users, there are plenty of different filesystem options to choose from. The current default in the Linux world is known as the Fourth Extended Filesystem, or ext4.

Currently there is discussion about changes in the Linux filesystem space. Much like the switch to systemd as the default init system a few years ago, there has been a push to change the default Linux filesystem to Btrfs. No, I'm not using slang or trying to insult you. Btrfs stands for the B-Tree filesystem. Many Linux users and sysadmins were not too happy with the initial changes. That could be because people are generally hesitant to change, or because the change may have been too abrupt. A friend once said, "I've learned that fear limits you and your vision. It serves as blinders to what may be just a few steps down the road for you." In this article I want to help ease the understanding of Btrfs and make the transition as smooth as butter. Let's go over a few things first.

What do Filesystems do?

Just to be clear, we can summarize what filesystems do and what they are used for. As mentioned before, filesystems control how data is stored after a program is no longer using it, how that data is accessed, where it is located, and what is attached to it. As a sysadmin, one of your many tasks and responsibilities is to maintain backups and manage filesystems. Partitioning helps separate different areas in business environments and is common practice for data retention. An example would be taking a 3TB hard disk and partitioning 1TB for your production environment, 1TB for your development environment, and 1TB for company-related documents and files. When accidents happen to a specific partition, only the data stored in that partition is affected, instead of the entire 3TB drive. A fun example would be a user testing a script in a development application that begins filling up disk space in the dev partition. Filling up a filesystem accidentally, whether from an application, a user's script, or anything else on the system, could cause an entire system to stop functioning. If data is split across separate partitions, only the data in the affected partition is at risk, so the production and company data partitions are safe.
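
As a rough illustration of that 3TB example, the commands below carve a hypothetical disk (/dev/sdb) into three partitions and mount them separately; the device name, sizes, and mount points are all assumptions:

# partition a hypothetical 3TB disk into three roughly equal slices
sudo parted --script /dev/sdb mklabel gpt \
    mkpart prod ext4 0% 33% \
    mkpart dev ext4 33% 67% \
    mkpart docs ext4 67% 100%

# put a filesystem on each slice
sudo mkfs.ext4 /dev/sdb1
sudo mkfs.ext4 /dev/sdb2
sudo mkfs.ext4 /dev/sdb3

# mount each partition at its own point
sudo mkdir -p /srv/prod /srv/dev /srv/docs
sudo mount /dev/sdb1 /srv/prod
sudo mount /dev/sdb2 /srv/dev
sudo mount /dev/sdb3 /srv/docs

# if a runaway script fills /srv/dev, only that partition reports full
df -h /srv/prod /srv/dev /srv/docs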

11/02/2020   Linux Journal
How to Try Linux Without a Classical Installation

For many different reasons, you may not be able to install Linux on your computer.

Maybe you are not familiar with words like partitioning and bootloader, maybe you share the PC with your family, maybe you don't feel comfortable wiping your hard drive and starting over, or maybe you just want to see how it looks before proceeding with a full installation.

I know, it feels frustrating, but no worries, we have got you covered!

In this article, we will explore several ways to try Linux out without the hassle of a classical installation.

Choosing a distribution

In the Linux world, there are several distributions, which can be quite different from one another.

Some are general-purpose operating systems, while others are created with a specific use case in mind. That being said, I know how confusing this can be for a beginner.

If you are taking your first steps with Linux and are still not sure how and why to pick one distribution over another, there are several resources available online to help you.

A perfect example of these resources is the website https://distrochooser.de/, which will walk you through a questionnaire to understand your needs and advise on which distribution could be a good fit for your use case.

Once you have chosen your distribution, there is a good chance it will have a live CD image available for testing before installation. If so, below you can find several ways to "boot" your live CD ISO image.

MobaLiveCD

MobaLiveCD is an amazing open source application which lets you run a live Linux on Windows with nearly zero effort.

Download the application from the download page on the official site and run it.

It will present a screen where you can choose either a Linux Live CD ISO file or a bootable USB drive.

[Image: MobaLiveCD]

Click on Run the LiveCD, select your ISO file, and select No when asked whether you want to create a hard disk.

[Image: MobaLiveCD prompt]

Your Linux virtual machine will boot up “automagically”.

Slackware

10/28/2020   Linux Journal
Creating EC2 Duplicate with Ansible

Many companies like mine rely heavily on AWS infrastructure as a service (IaaS). Sometimes we want to perform a potentially risky operation on an EC2 instance. As long as we are not working with immutable infrastructure, it is imperative to be prepared for an instant revert.

One solution is a script that performs instance duplication, but in modern environments, where unification is essential, it is wiser to use commonly known software than to make up a custom script.

Here comes Ansible!

Ansible is a simple automation tool. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. It is marketed as a tool for making complex changes like zero-downtime rolling patching, so we have used it for this straightforward snapshotting task.

Requirements

For this example we will only need Ansible itself; in my case it was version 2.9. Subsequent releases introduce a major change with collections, so let's stick with this version for simplicity.

Because we are working with AWS, we also require a minimal set of permissions, which must allow us to:

  • Create AWS snapshots
  • Register images (AMI)
  • Start and stop EC2 instances

Environment preparation

Since I am forced to work on Windows, I have used a Vagrant instance. Please find the Vagrantfile content below.

We are launching a virtual machine with CentOS 7 and Ansible installed.

For security reasons Ansible, by default, refuses to read its configuration from a mounted location, so we have to explicitly point it at the path /vagrant/ansible.cfg.

Listing 1. Vagrantfile for our research

Vagrant.configure("2") do |config|
  config.vm.box = "geerlingguy/centos7"
  config.vm.hostname = "awx"
  config.vm.provider "virtualbox" do |vb|
    vb.name = "AWX"
    vb.memory = "2048"
    vb.cpus = 3
  end
  config.vm.provision "shell", inline: "yum install -y git python3-pip"
  config.vm.provision "shell", inline: "pip3 install ansible==2.9.10"
  config.vm.provision "shell", inline: "echo 'export ANSIBLE_CONFIG=/vagrant/ansible.cfg' >> /home/vagrant/.bashrc"
end
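
With the Vagrantfile in place, bringing the box up and checking that Ansible sees the mounted configuration is quick (a sketch):

vagrant up          # provisions CentOS 7, git, pip and Ansible 2.9.10
vagrant ssh         # log in to the box
ansible --version   # the output should list /vagrant/ansible.cfg as the config file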

First tasks

In the first lines of the playbook we specify a few meta values. Most of them, like name, hosts, and tasks, are mandatory; others provide auxiliary functions.

Listing 2. duplicate_ec2.yml playbook first lines

---
- name: yolo
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  vars:
    instance_id: i-deadbeef007

  tasks:
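
Once the tasks are filled in, running the playbook would look roughly like this (a sketch; the credentials and instance ID are placeholders, picked up by the AWS modules from the standard environment variables):

export AWS_ACCESS_KEY_ID=AKIA...    # placeholder credentials
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=eu-west-1         # example region

ansible-playbook duplicate_ec2.yml -e instance_id=i-0123456789abcdef0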

10/27/2020   Linux Journal
TCP Analysis with Wireshark

Transmission Control is an essential aspect of network activity and governs the behavior of many services we take for granted. When sending email or just browsing the web, you are relying on TCP to send and receive your packets reliably. Thanks to two DARPA scientists, Vinton Cerf and Bob Kahn, who developed TCP/IP in the 1970s, we have a specific set of rules that define how we communicate over a network. When Vinton and Bob first conceptualized TCP/IP, they set up a basic network topology and a device that could interface between two other hosts.

[Figure 1: Network A and Network B connected by a gateway]

In Figure 1 we have two networks connected by a single gateway. The gateway plays an essential role in the development of any network and bears the responsibility of routing data properly between the two networks.

Since the gateway must understand the addresses of each host on the network, it is necessary to have a standard format in every packet that arrives. Vince and Bob called this the internetwork header prefixed to the packet by the source host.

[Figure: Internetwork header]

The source and destination entries, along with the IP address, uniquely identify every host on the network so that the gateway can accurately forward packets.

The sequence number and byte count identify each packet sent from the source and account for all of the text within the segment. The receiver can use these to determine whether it has already seen the packet and discard it if necessary.

The checksum is used to validate each packet being sent and to ensure error-free transmission. This checksum uses a false header that encapsulates data from the original TCP header, such as the source/destination entries, header length, and byte count.
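
Since the topic is Wireshark, here is one way to pull those same fields out of an existing capture with tshark, Wireshark's command-line companion (a sketch; the capture file name is an assumption):

# print source/destination, sequence number, segment length and checksum
# for every TCP segment in an existing capture file
tshark -r capture.pcap -Y tcp -T fields \
    -e ip.src -e ip.dst -e tcp.seq -e tcp.len -e tcp.checksum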

10/26/2020   Linux Journal
How to Add a Simple Progress Bar in Shell Script

At times, we need to write shell scripts that are interactive, and the user executing them needs to monitor the progress. For such requirements, we can implement a simple progress bar that gives an idea of how much of the task the script has completed or how far its execution has gone.

To implement it, we only need to use the “echo” command with the following options and a backslash-escaped character.

-n : do not append a newline
-e : enable interpretation of backslash escapes
\r : carriage return (go back to the beginning of the line without printing a newline)

For the sake of understanding, we will use the “sleep 2” command to represent an ongoing task or a step in our shell script. In a real scenario, this could be anything like downloading files, creating a backup, or validating user input. Also, as an example, we assume only four steps in the script below, which is why we use 20, 40, 60, and 80 (%) as progress indicators. This can be adjusted according to the number of steps in a script. For instance, a script with three steps could be represented by 33, 66, and 99 (%), and a script with ten steps by 10-90 (%).

The implementation looks like the following:

echo -ne '>>>                       [20%]\r'
# some task
sleep 2
echo -ne '>>>>>>>                   [40%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>            [60%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>>   [80%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>>>>>[100%]\r'
echo -ne '\n'

In effect, every time the “echo” command executes, it replaces the output of the previous “echo” command in the terminal, thus representing a simple progress bar. The last “echo” command simply prints a newline (\n) in the terminal to resume the prompt for the user.

The execution looks like the following:

[Image: simple progress bar shell execution]
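
If a script has more than a handful of steps, the same echo options can be driven from a loop instead of hard-coding each line; a small sketch (the step count and bar width are arbitrary):

#!/bin/bash

steps=10     # number of tasks in the script
width=26     # total width of the bar

for ((i = 1; i <= steps; i++)); do
    # some task
    sleep 2

    percent=$(( i * 100 / steps ))
    filled=$(( i * width / steps ))
    bar=$(printf '>%.0s' $(seq 1 "$filled"))          # a row of '>' characters
    echo -ne "$(printf '%-*s' "$width" "$bar")[${percent}%]\r"
done
echo -ne '\n'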