News & Information       http://info.owt.com

Linux

03/06/2021   LinuxSecurity.com
A buffer overflow issue in the caca_resize function in libcaca/caca/canvas.c may lead to local execution of arbitrary code in the user context.
03/05/2021   LinuxSecurity.com
An update that fixes one vulnerability is now available.
03/05/2021   LinuxSecurity.com
An update that fixes one vulnerability is now available.
03/05/2021   LinuxSecurity.com
Stability update for hardware accelerated backend (mozbz#1694670). ---- New upstream update (86.0). Should also fix some rendering issues in KDE in certain configurations. Depends on and cannot be pushed stable without https://bodhi.fedoraproject.org/updates/FEDORA-2021-bdc10e21fc .
03/05/2021   LinuxSecurity.com
An update that contains security fixes can now be installed.
03/05/2021   LinuxSecurity.com
An update that fixes four vulnerabilities is now available.
03/02/2021   Linux Journal
weLees Visual LVM Manager

Maintaining the storage system is a daily job for system administrators. Linux provides users with a wealth of storage capabilities and powerful built-in maintenance tools. However, these tools are hardly friendly to system administrators, and mastering them generally requires considerable effort.

As Linux's built-in storage model, LVM offers users flexible management modes to fit various needs. For users who can fully utilize its functions, LVM can meet almost every need, but the prerequisite is a thorough understanding of the LVM model, of dozens of commands, and of their accompanying parameters.

A graphical interface would dramatically flatten the learning curve and simplify day-to-day operation of LVM, much as the partition tools widely used on Windows and Linux platforms do. Although command scripts are suitable for routine, automated tasks, scripts cannot cover every LVM function; many tasks still require manual calculation and processing, as the sketch below illustrates.
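
As an illustration of that manual work, here is a minimal sketch of the command sequence needed just to grow one logical volume onto a new disk; the device and volume names are hypothetical:

    # Extending a logical volume by hand: a typical multi-step LVM session.
    pvcreate /dev/sdb1                      # initialize the new physical volume
    vgextend vg_data /dev/sdb1              # add it to an existing volume group
    vgs vg_data                             # inspect free extents (manual arithmetic starts here)
    lvextend -L +50G /dev/vg_data/lv_home   # grow the logical volume
    resize2fs /dev/vg_data/lv_home          # grow the ext4 filesystem to match

Each step has its own parameters and failure modes, which is exactly the burden a graphical tool can absorb.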

Significant effort has been spent on this problem. Several graphical LVM management tools are now available: some ship with Linux distributions, others come from third parties. But a critical gap remains: remote machines and headless servers are completely ignored.

This is now solved by Visual LVM Remote. Its front end is built on the HTTP protocol, so users can perform management operations from any smart device that can connect to the storage server.

Visual LVM is developed by weLees Corporation and supports all Linux distributions. In addition to working with remote and headless servers, it supports more advanced LVM features than the various off-the-shelf graphical LVM management tools.

Dependencies of Visual LVM Remote

Visual LVM Remote can work on any Linux distribution that includes the two components below (a quick check is sketched after the list):

  • LVM2

  • libstdc++.so
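
One quick way to confirm both components are present (paths and package names vary by distribution; this is only a sketch):

    # Verify the LVM2 userspace tools are installed and runnable.
    command -v lvm >/dev/null && lvm version
    # Verify the C++ runtime library is known to the dynamic linker.
    ldconfig -p | grep -F libstdc++.so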

UI of Visual LVM Remote

The UI is concise: partitions, physical volumes, and logical volumes are displayed by disk layout, so disk and volume-group information can be taken in at a glance. In addition, hovering the mouse over an object displays its detailed information in the information bar below.

02/23/2021   Linux Journal
Nvidia Linux Drivers

The recent fiasco in which Nvidia tried to cut off Hardware Unboxed from future GPU review samples over the content of their review is one example of how the company chooses to play this game. This hatred of Nvidia is shared not only by reviewers but also by developers, and especially by Linux users.

The infamous Torvalds videos still traverse the web today as Nvidia conjures up another evil plan to suck up more of your money and market share. This is not just a one-off case; oh, how I wish it were. I just want my computer to work.

If anyone has used Sway-WM with an Nvidia GPU, I’m sure they will remember the --my-next-gpu-wont-be-nvidia option.

These are a few examples of many.

The Nvidia Linux drivers have never been good, but whatever has been happening at Nvidia for the past decade has to stop. The topic in question today is this bug: [https://forums.developer.nvidia.com/t/bug-report-455-23-04-kernel-panic-due-to-null-pointer-dereference]

This bug causes hard, irrecoverable crashes on drivers 440 and later. The issue is still happening 5+ months later with no end in sight. At first, users could work around it by using an older DKMS driver along with an LTS kernel. Today this is no longer possible: many Linux distributions are dropping the old kernels, and the DKMS module no longer builds. Users are now FORCED into this “choice”:

use an older driver and risk the security implications, or “use” the new drivers that cause random, irrecoverable crashes.

This issue is only going to become more prevalent, since the kernel is a core dependency by definition. Here is another example of an unsafe older kernel causing issues for users: https://archlinux.org/news/moving-to-zstandard-images-by-default-on-mkinitcpio/

If you use Linux or care about the implications of a GPU monopoly, consider AMD. Nvidia is already rearing its ugly head, and AMD is actually putting up a fight this year.

02/16/2021   Linux Journal
Parallel Shells With xargs: Utilize All Your CPU Cores on UNIX

Introduction

One particular frustration with the UNIX shell is the inability to easily schedule multiple concurrent tasks that fully utilize the CPU cores on modern systems. The example of focus in this article is file compression, but the problem arises with many computationally intensive tasks, such as image/audio/media processing, password cracking and hash analysis, database Extract, Transform, and Load, and backup activities. It is understandably frustrating to wait for gzip * running on a single CPU core while most of a machine's processing power lies idle.

This can be understood as a weakness of the first decade of Research UNIX, which was not developed on machines with symmetric multiprocessing (SMP). The Bourne shell emerged from the 7th edition without any native syntax or controls for cohesively managing the resource consumption of background processes.

Utilities have haphazardly evolved to perform some of these functions. The GNU version of xargs is able to exercise some primitive control in allocating background processes, which is discussed at some length in its documentation. While the GNU extensions to xargs have proliferated to many other implementations (notably BusyBox, including the release for Microsoft Windows; see the example below), they are not POSIX.2-compliant and likely will not be found on commercial UNIX.
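
As an illustration of that GNU extension, the sketch below compresses every file in a directory with one gzip process per file, running as many jobs at once as there are CPU cores; note that -P, -0, -print0, and nproc are all GNU additions rather than POSIX.2 features:

    # One gzip per file, up to $(nproc) running concurrently.
    # -print0/-0 keep filenames containing whitespace intact.
    find . -maxdepth 1 -type f -print0 |
        xargs -0 -n 1 -P "$(nproc)" gzip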

Historic users of xargs will remember it as a useful tool for directories that contained too many files for echo * or other wildcards to be used; in this situation, xargs is called to repeatedly batch groups of files with a single command. As xargs has evolved beyond POSIX, it has assumed a new relevance that is useful to explore.
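
That historic batching behavior looks like this; the idiom predates the GNU extensions and is shown only as a sketch (it is unsafe for filenames containing whitespace):

    # When "rm *" fails with "Argument list too long", xargs splits the
    # file list into batches small enough for the kernel's argument limit.
    ls | xargs rm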


Why is POSIX.2 this bad?

A clear understanding of the lack of cohesive job scheduling in UNIX requires some history of the evolution of these utilities.

02/11/2021   Linux Journal
Bypassing Deep Packet Inspection

In some countries, network operators employ deep packet inspection techniques to block certain types of traffic. For example, Virtual Private Network (VPN) traffic can be analyzed and blocked to prevent users from sending encrypted packets over such networks.

Observing that HTTPS works all over the world (it is configured on an extremely large number of web servers) and cannot be easily analyzed (the payload is usually encrypted), we argue that VPN tunneling can be organized in the same manner: by masquerading VPN traffic as TLS, or its older version SSL, we can build a reliable and secure network. Packets sent over such tunnels can cross multiple domains with various (strict and not so strict) security policies. Although SSH could potentially be used to build such a network, we have evidence that in certain countries connections made over such tunnels are analyzed statistically: if utilization of the tunnel is high, bursts exist, or connections are long-lived, then the underlying TCP connections are reset by network operators.

Thus, here we make an experimental effort in this direction. First, we describe different VPN solutions that exist on the Internet; second, we describe our experimental effort with Python-based software on Linux, which allows users to create VPN tunnels using the TLS protocol and to route small office/home office (SOHO) traffic through them.
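
The authors' Python implementation is not reproduced here, but the idea of masquerading a tunnel as ordinary TLS can be sketched with the general-purpose socat relay; the hostnames, ports, and certificate handling below are illustrative assumptions only:

    # Server (hypothetical vpn.example.com): accept TLS on the HTTPS port
    # and forward the decrypted stream to a local VPN endpoint.
    socat OPENSSL-LISTEN:443,reuseaddr,fork,cert=server.pem,verify=0 \
          TCP:127.0.0.1:1194

    # Client: wrap everything sent to a local port in TLS, so on the wire
    # the tunnel looks like ordinary HTTPS traffic.
    # (verify=0 skips certificate checks; a real deployment would verify.)
    socat TCP-LISTEN:1194,reuseaddr,fork \
          OPENSSL:vpn.example.com:443,verify=0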

I. INTRODUCTION

Virtual private networks (VPNs) are crucial in the modern era. By encapsulating and sending a client’s traffic inside protected tunnels, users can obtain network services that would otherwise be blocked by a network operator. VPN solutions are also useful for accessing a company’s intranet: corporate employees can reach the internal network securely by establishing a VPN connection and directing all traffic through the tunnel toward the corporate network, obtaining services that would be impossible to get from the outside world.

II. BACKGROUND

There are various solutions that can be used to build VPNs. One example is the Host Identity Protocol (HIP) [7]. HIP is a layer-3.5 solution (it sits between the transport and network layers) and was originally designed to split the dual role of IP addresses as identifier and locator. For example, a company called Tempered Networks uses HIP to build secure networks (for an example, see [4]).

02/04/2021   Linux Journal
Knapsack Pro: Running Ruby and JavaScript Tests in Parallel

Automated tests are part of many programming projects, helping ensure the software works correctly. The bigger the project, the larger the test suite can be, and this can result in automated tests taking a long time to run. In this article you will learn how to run automated tests faster with parallel Continuous Integration (CI) machines, and what problems can be encountered along the way. The article covers common parallel-testing problems, based on Ruby and JavaScript tests.

Slow automated tests

Automated tests can be considered slow when programmers stop running the whole test suite on their local machines because it is too time-consuming. Most of the time you use CI servers such as Jenkins, CircleCI, or GitHub Actions to run tests on an external machine instead of your own. When a test suite runs for an hour, it is not efficient to run it on your computer, and browser end-to-end tests for a web project can take especially long to execute. But running tests on a CI server for an hour is not efficient either: as a developer, you need a fast feedback loop to know whether your software works, and automated tests should help you with that.

Split tests between many CI machines to save time

One way to save time is to make the CI build as fast as possible. When your tests take, say, an hour to run, you can leverage your CI server configuration and set up parallel jobs (parallel CI machines/nodes). Each parallel job can then run a chunk of the test suite.

You need to divide your tests between the parallel CI machines. With a 60-minute test suite, you could run 20 parallel jobs, each running a small set of tests, and this should save you time. In an optimal scenario, each job would run tests for 3 minutes.

How do you make sure each job runs for 3 minutes? As a first step you can apply a simple solution: sort all of your test files alphabetically and divide them among the parallel jobs. But each test file has a different execution time, depending on how many test cases it contains and how complex each test case is, so you can end up with the files divided in a suboptimal way, and this is problematic: one job runs too many tests and becomes a bottleneck for the whole build.
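
A minimal sketch of that naive split, assuming the CI server exposes a zero-based job index and a total job count (the variable names here are hypothetical):

    # Give every n-th test file (alphabetically) to this node, then run them.
    find spec -name '*_spec.rb' | sort |
        awk -v i="$CI_NODE_INDEX" -v n="$CI_NODE_TOTAL" 'NR % n == i' |
        xargs bundle exec rspec

Because this split ignores per-file runtime, one node can still end up with all the slow files, which is precisely the bottleneck described above.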