
Ghost in the Shell – Part 4

Long time no see. It has been a while since the last post in the Ghost in the Shell series. It is also exactly one full year since I started this blog – the first article in the Ghost in the Shell series – Part 1 – was published on 2018/03/15.

Today I would like to show you a new pack of useful tricks and features for productive terminal/shell use. Let's start with something simple yet useful.

You may want to check other articles in the Ghost in the Shell series on the Ghost in the Shell – Global Page, where you will find links to all episodes of the series along with a table of contents for each episode.

Named Pipes

We all (or at least most of us :>) know and love pipes in UNIX. For the record – ls | grep match | awk '{print $3}' | sed 's/.jpg//g' – command 'chains' like that one 🙂

What is a named pipe then? It is a manually created pipe for special purposes. For example some applications – especially the so called Enterprise ones – often do not support UNIX pipe mechanisms – they can only dump something to a file. A great example of such Enterprise software is the Oracle database, whose dump command can only dump to a file. With a tool that supports UNIX pipes you would probably pipe that data to gzip(1)/xz(1) to compress it on the fly, or even pipe it directly over ssh(1) to a backup server – but not with Oracle.

This is where the named pipes feature helps. We will create a named pipe called /tmp/PIPE so that Oracle's dump command will be able to write to it, and on the other side of this pipe we will attach a gzip -9 command to compress that data on the fly.

The example below is from a Linux system, so the mknod(1) command is used – on FreeBSD you would use the mkfifo(1) command instead. A complete example of such a named pipe is presented below.

root # cd /tmp
root # mknod /tmp/PIPE p
root # chown oracle:oinstall /tmp/PIPE
root # dd if=/tmp/PIPE bs=1M | gzip -9 > /mnt/oracle/oracle-database-backup.dmp.gz &

Now the /tmp/PIPE named pipe is ready to be used. When any process starts to write something to the /tmp/PIPE named pipe it will be automatically read by the dd(8) command and piped to the gzip(1) command, which will compress that input and write it into the /mnt/oracle/oracle-database-backup.dmp.gz file.

Now we can start the Oracle dumping process with the dump command.

root # su - oracle
oracle % dump file=/tmp/PIPE

When the dump command finishes its work you will find all your dumped data compressed in the /mnt/oracle/oracle-database-backup.dmp.gz file.
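
Restoring such a backup works the same way, just in the opposite direction – below is a sketch of that idea, where the restore command name is only an example as it depends on the tool you use. The gzip -dc command decompresses the archive to its standard output, which is redirected into the same named pipe, so the process reading on the other side gets the plain uncompressed dump.

root # gzip -dc /mnt/oracle/oracle-database-backup.dmp.gz > /tmp/PIPE &
root # su - oracle
oracle % restore file=/tmp/PIPE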

Another example of named pipe usage is my desktop dzen2 setup with an unusual update schedule – described in detail in the FreeBSD Desktop – Part 13 – Configuration – Dzen2 article.

Modify Command Environment on the Fly

Most of the time we use the export(1) builtin to export the environment values that our command needs. You can then check the exported environment values with the env(1) command of course … but you can also use the same env(1) command to run any command with a modified environment, without exporting variables with export(1) at all.

Here is a brief example of this feature.

For the record – the gls(1) command is the GNU ls(1) command from the sysutils/coreutils package/port. To make it work without name conflicts on FreeBSD, where BSD ls(1) is also present, it has been renamed to gls(1).

% gls -l | head -1
total 8609K

% env LC_ALL=pl_PL.UTF-8 gls -l | head -1
razem 8609K

In the example above we ran the gls(1) command with the default environment – I use the en_US.UTF-8 locale daily. The second invocation with the LC_ALL=pl_PL.UTF-8 modified environment made gls(1) display its output in the Polish (pl_PL.UTF-8) language. The word 'razem' means 'total' in Polish.

Another useful example is using make(1) to build a FreeBSD port with known vulnerabilities. By default FreeBSD's build(7) system will not allow us to build such a port (and that is a good default) but if we know what we are doing we can use the following spell.

# env DISABLE_VULNERABILITIES=yes make -C /usr/ports/security/bdes/ build install clean

It is also useful with commands that do not play well with UTF-8 input, like tr(1) for example. When LC_ALL is set to en_US.UTF-8 it will throw an error at us.

% tr -cd '0-9' < /dev/random | head -c 16
tr: Illegal byte sequence
%

We just wanted to generate 16 random digits.

To make it work we will modify the LC_ALL environment variable for this single invocation.

% env LC_ALL=C tr -cd '0-9' < /dev/random | head -c 16
9571949869123855
%

Much better πŸ™‚
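
For the record – in any POSIX compatible shell you can also set a variable for a single command by prefixing that command with the assignment – no env(1) needed at all. The invocation below is equivalent to the previous one (the random output will differ of course).

% LC_ALL=C tr -cd '0-9' < /dev/random | head -c 16
1847216390571482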

Another example uses timezones with the date(1) command and the TZ variable, as shown below.

% date
Fri Mar 15 14:03:38 CET 2019

% env TZ=Australia/Darwin date 
Fri Mar 15 22:35:26 ACST 2019

The Real Path

Symlinks created with ln(1) are very useful in many ways – to organize stuff, for quick fixes, for versioning … you will find tons of other use cases.

There is just one problem: if you make too many levels of symlinks, or they are nested too deeply, you no longer know where you really are … this is where realpath(1) comes in handy. No matter how many levels of links you have made, it will tell you the truth – the current real path. The pwd(1) command will not help you here though.

Here is a short example how it works.

% pwd
/home/vermaden
% ln -s /home/vermaden ASD
% cd ASD
% pwd
/home/vermaden/ASD
% realpath
/home/vermaden

Browsing the PATH

Many times I wanted to 'browse' through the PATH to search for something. As you probably know, the PATH variable stores paths that are colon (:) separated.

You can redefine the IFS variable, which by default contains space, tab and newline, and which acts as the field delimiter for the for loop. Remember to restore or unset IFS afterwards, as leaving it set to ':' will break word splitting in the rest of the shell session.

Here is the example.

% export IFS=":"

% for I in $( echo ${PATH} ); do echo ${I}; done
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin 

% for I in $( echo ${PATH} ); do find ${I} -name ifconfig; done
/sbin/ifconfig

The other way to do this is to use the plain old tr(1) tool to translate colons (:) into newlines (\n) so we will be able to use a while loop here.

Here is the tr(1) example.

% echo ${PATH} | tr ':' '\n' | while read I; do echo ${I}; done
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin

% echo ${PATH} | tr ':' '\n' | while read I; do find ${I} -name dd; done
/bin/dd

You can also achieve the same thing using Parameter Expansion, changing the colons (:) into newlines (\n) as shown in the example below.

% echo "${PATH//:/\n}"
/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin

# echo "${PATH//:/\n}" | while read I; do find ${I} -name camcontrol; done
/sbin/camcontrol
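
If you browse the PATH often you may also wrap that into a tiny helper function as sketched below – named paths() here to avoid a clash with the special path array that ZSH ties to the PATH variable.

# LIST PATH ENTRIES ONE PER LINE paths()
  paths() { echo "${PATH}" | tr ':' '\n'; }

% paths | tail -2
/usr/local/sbin
/usr/local/bin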

Parameter Expansion

I will not show all possible Parameter Expansion methods – just the most useful ones.

The typical use is to get the extension of a file or to 'emulate' the basename(1) or dirname(1) commands – it is faster to use Parameter Expansion than to invoke these commands each time. Below are two tables showing what you will get from each Parameter Expansion method.

PARAMETER    RESULT                       DESC 
-----------  ---------------------------  --------------
${name}      kubica.polish.racing.legend  content
${name#*.}          polish.racing.legend  -
${name##*.}                       legend  extension
${name%%.*}  kubica                       -
${name%.*}   kubica.polish.racing         -

… and with slash (/) character.

PARAMETER    RESULT                       DESC 
-----------  ---------------------------  --------------
${name}      kubica/polish/racing/legend  content
${name#*/}          polish/racing/legend  -
${name##*/}                       legend  basename(1)
${name%%/*}  kubica                       first directory
${name%/*}   kubica/polish/racing         dirname(1)

You can also use Parameter Expansion methods to grab the protocol from a URL as shown below.

% URL="https://vermaden.wordpress.com"

% echo "${URL%%/*}"
https:
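
To wrap it up – this is how I would use these in practice to get the base name, the directory name and the extension of a path, with the same results that basename(1) and dirname(1) would give, just without forking new processes.

% FILE="/home/vermaden/gfx/wallpaper.png"

% echo "${FILE##*/}"
wallpaper.png

% echo "${FILE%/*}"
/home/vermaden/gfx

% echo "${FILE##*.}"
png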

Sort Human Readable Values

It is simple and easy to sort plain numerical values – we use sort -n for that – but values sometimes come in human readable form like 4G, 350M and 120K. To sort these properly you will have to use the sort -h flag as shown in the example below.

% du -sh /usr/*
102M    /usr/bin
228G    /usr/home
9.0M    /usr/include
 53M    /usr/lib
 43M    /usr/lib32
116K    /usr/libdata
1.9M    /usr/libexec
365M    /usr/local
512B    /usr/obj
9.5M    /usr/sbin
 39M    /usr/share
251K    /usr/tests

% du -sh /usr/* | sort -h
512B    /usr/obj
116K    /usr/libdata
251K    /usr/tests
1.9M    /usr/libexec
9.0M    /usr/include
9.5M    /usr/sbin
 39M    /usr/share
 43M    /usr/lib32
 53M    /usr/lib
102M    /usr/bin
365M    /usr/local
228G    /usr/home

If the values are in the first column then it is simple, but what to do when they are not? You will use the -k parameter of sort(1), which takes the column to sort by as an argument. The example below is sorted by human readable values in the second (USED) column.

% zfs list | sort -h -k 2
NAME                         USED  AVAIL  REFER  MOUNTPOINT
local/usr/obj                 88K   130G    88K  /usr/obj
local/var/cache/pkg          128K   130G   128K  /var/cache/pkg
local/var/cache              216K   130G    88K  none
local/var                    304K   130G    88K  none
sys/ROOT/11.1-RELEASE        482M  2.39G  6.04G  /
local/usr/ports              729M   130G   729M  /usr/ports
local/jail/nextcloud         927M   130G   897M  /jail/nextcloud
local/jail                  1.00G   130G   100M  /jail
local/usr/src               1.28G   130G  1.28G  /usr/src
local/usr                   1.99G   130G    88K  none
sys/ROOT/11.2-RELEASE       8.69G  2.39G  7.10G  /
sys/ROOT                    9.16G  2.39G    88K  none
sys                         9.17G  2.39G    88K  none
local/home                   281G   130G   281G  /home
local                        288G   130G    88K  none

Write a File from vi(1) with Different Rights

How many times have you opened a system configuration file like /etc/sysctl.conf or /etc/fstab in your favorite vi(1) editor, made some changes and then, when you wanted to save it – no luck – you are trying to write to a file owned by root as a regular user … the Read-only file, not written; use ! to override. message will be displayed. Of course you can save that file somewhere else, like your home directory, and then move it with doas(1)/sudo(8)/su(8) help to the original location and fix its rights … or you may do it in one step instead.

After opening a file with vi(1) and making some changes, to write the file with doas(1)/sudo(8) rights you just need to type this.

:w !doas tee %

Then exit the vi(1) editor with force.

:q!

Here is how it looks in the editor.

:w !doas tee %

+=+=+=+=+=+=+=+
File contents are displayed here.

Press any key to continue [: to enter more ex commands]: [ENTER]

Here is the ‘legend’ for that spell.

:      vi(1) prompt
w      write a file
!doas  invoke doas(1) command
tee    command that will be started using doas(1) command
%      tells vi(1) to use current filename

In this process the current vi(1) buffer contents will be redirected using tee(1) with doas(1) rights to the current filename (the one you opened).

Of course it also works in vim(1) or neovim(1), and if sudo(8) is your poison then just use sudo instead of doas(1) there.
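
For the record – the sudo(8) variant of that spell is often written with an added redirect, so that tee(1) does not print the file contents back to the screen.

:w !sudo tee % > /dev/null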

Search Contents of PDF Files

We all love plain text files as they can be searched with grep(1) for the data that interests us … but grep(1) does not work with PDF files … or should I say it is pointless/useless to use grep(1) to search PDF files. Fortunately the pdfgrep(1) command exists and works beautifully with PDF files – including colored output.

Recently the FreeBSD Journal has been made free, so if you would like to search for bhyve articles in the FreeBSD Journal issues then this is the command for you.

% cd books/unix-bsd-journal
% exa
FreeBSD Journal - 2014-01-02.pdf FreeBSD Journal - 2016-09-10.pdf
FreeBSD Journal - 2014-03-04.pdf FreeBSD Journal - 2016-11-12.pdf
FreeBSD Journal - 2014-05-06.pdf FreeBSD Journal - 2017-01-02.pdf
FreeBSD Journal - 2014-07-08.pdf FreeBSD Journal - 2017-03-04.pdf
FreeBSD Journal - 2014-09-10.pdf FreeBSD Journal - 2017-05-06.pdf
FreeBSD Journal - 2014-11-12.pdf FreeBSD Journal - 2017-07-08.pdf
FreeBSD Journal - 2015-01-02.pdf FreeBSD Journal - 2017-09-10.pdf
FreeBSD Journal - 2015-03-04.pdf FreeBSD Journal - 2017-11-12.pdf
FreeBSD Journal - 2015-05-06.pdf FreeBSD Journal - 2018-01-02.pdf
FreeBSD Journal - 2015-07-08.pdf FreeBSD Journal - 2018-03-04.pdf
FreeBSD Journal - 2015-09-10.pdf FreeBSD Journal - 2018-05-06.pdf
FreeBSD Journal - 2015-11-12.pdf FreeBSD Journal - 2018-07-08.pdf
FreeBSD Journal - 2016-01-02.pdf FreeBSD Journal - 2018-09-10.pdf
FreeBSD Journal - 2016-03-04.pdf FreeBSD Journal - 2018-11-12.pdf
FreeBSD Journal - 2016-05-06.pdf FreeBSD Journal - 2019-01-02.pdf
FreeBSD Journal - 2016-07-08.pdf

% pdfgrep -i -n bhyve *.pdf
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: machine hypervisors, such as BHy
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe IS THE BSD Hypervisor, de
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: Grehan and Neel Natu. The desig
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe requires Intel CPUs w
FreeBSD Journal - 2014-01-02 - Old Release.pdf:6: BHyVe appeared in FreeBSD 1
FreeBSD Journal - 2014-01-02.pdf:42: machine hypervisors, such as BHyVe, Virtual
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe e d
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe IS THE BSD Hypervisor, developed by P
FreeBSD Journal - 2014-01-02.pdf:42: Grehan and Neel Natu. The design goal of BH
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe requires Intel CPUs with VT-x and
FreeBSD Journal - 2014-01-02.pdf:42: BHyVe appeared in FreeBSD 10-CURRENT in
(...)

Here is how it looks in the xterm(1) terminal.

[Screenshot: xterm-pdfgrep.png]

Hope that today’s pack of spells will end up useful for you.

EOF

Ghost in the Shell – Part 3

Time to bring some life into the Ghost in the Shell series with Part 3 article.

You may want to check other articles in the Ghost in the Shell series on the Ghost in the Shell – Global Page, where you will find links to all episodes of the series along with a table of contents for each episode.

Query Functions

I haven't found a better name for this solution. There are generally two types of UNIX people: those who prefer to navigate and operate with the basic ls/cd/mv/mkdir/rm commands, and those who use some file manager like Midnight Commander (mc) or ranger or vifm or … you get the idea. I have tried various CLI file managers but always came back to navigating without them. If you are one of those people then these Query Functions are for you 🙂

The so called Query Functions are for filtering the information you look for. For example if you have a directory with a large number of files, you would probably do something like this.

% ls | grep QUERY

… or if you also want to include subdirectories then something like this.

% find . | grep QUERY

For both of these examples you would probably also want to sometimes search case sensitively or insensitively, depending on the need.

That leads us to four Query Functions:

  • q is an equivalent of ls | grep -i QUERY command.
  • Q is an equivalent of ls | grep QUERY command.
  • qq is an equivalent of find . | grep -i QUERY command.
  • QQ is an equivalent of find . | grep QUERY command.

Thus when I need to query the contents of a directory while searching for something, it is very fast with q SOMETHING.

These are definitions of these Query Functions:

# SHORT QUERY FUNCTIONS q()
  q() {
    if [ ${#} -eq 1 ]
    then
      ls | grep --color -i ${1} 2> /dev/null
    else
      echo "usage: q string"
    fi
  }
     
# SHORT QUERY FUNCTIONS Q()
  Q() {
    if [ ${#} -eq 1 ]
    then
      ls | grep --color ${1} 2> /dev/null
    else
      echo "usage: Q string"
    fi
  }

# SHORT QUERY FUNCTIONS qq()
  qq() {
    if [ ${#} -eq 1 ]
    then
      find . \
        | grep -i ${1} 2> /dev/null \
        | cut -c 3-999 \
        | grep --color -i ${1} 2> /dev/null
    else
      echo "usage: qq string"
    fi
  }

# SHORT QUERY FUNCTIONS QQ()
  QQ() {
    if [ ${#} -eq 1 ]
    then
      find . \
        | grep ${1} 2> /dev/null \
        | cut -c 3-999 \
        | grep ${1} 2> /dev/null
    else
      echo "usage: QQ string"
    fi
  }

The qq and QQ functions use grep(1) two times to make sure the output is colored.

I assume that you use colored grep(1) as described in the Ghost in the Shell – Part 2 article.

If you prefer to use alias(1) instead then they would look like that.

# SHORT QUERY FUNCTIONS q() Q() qq() QQ()
  alias q="ls | grep --color -i"
  alias Q="ls | grep --color"
  alias qq="find . | grep -i"
  alias QQ="find . | grep"

The qq and QQ aliases will be a little more limited, as with functions it is possible to trim the output to the exact needs with cut(1).

[Screenshot: q.png]

[Screenshot: qq.png]

Lots of people use recursive history search, which also helps, but what if you used/typed the needed command long ago, with the arguments you need now? You would probably search for the command with the history(1) command and then use grep(1) to limit the results to what you are looking for. I keep an enormously large list of commands in history – with my current setting of 655360 the ~/.zhistory (ZSH) file takes about 2.7 MB. I also wanted to be sure that two identical commands would not be kept in history, hence the setopt hist_ignore_all_dups ZSH option is enabled. When I wc -l my ~/.zhistory file it currently has 75695 lines of commands.

% grep HISTSIZE /usr/local/etc/zshrc
export HISTSIZE=655360
export SAVEHIST=${HISTSIZE}

% grep dups /usr/local/etc/zshrc
setopt hist_ignore_all_dups

Now back to Query Functions for history:

  • h is an equivalent of cat ~/.zhistory | grep -i QUERY command.
  • H is an equivalent of cat ~/.zhistory | grep QUERY command.

They fit in aliases this time. In the aliases we will use just grep(1) with input redirection to avoid a Useless Use of Cat.

Here are the Query Functions for history.

# SHORT HISTORY ALIASES h() H()
  alias h='< ~/.zhistory grep -i'
  alias H='< ~/.zhistory grep'

[Screenshot: h alias]

… but what if we would like to filter the output of the q family and h family Query Functions? The obvious response is to use grep(1), like q QUERY | grep ANOTHER or h QUERY | grep ANOTHER. To make that faster we will make g and G shortcuts.

  • g is an equivalent of grep -i command.
  • G is an equivalent of just grep command.

Here they are.

# SHORT GREP FUNCTIONS g() G()
  alias g='grep -i'
  alias G='grep'

Now it will be just q QUERY | g ANOTHER and h QUERY | G ANOTHER for example.

To clear the terminal output you may use the clear(1) command; some prefer the [CTRL]-[L] shortcut, but I find the 'c' alias to be the fastest solution.

# SHORT CLEAR ALIAS c()
  alias c='clear'

To make the solution complete I would also add exa(1) here with an alias of ‘e‘.

# SHORT LISTING WITH e()
  alias e='exa --time-style=long-iso --group-directories-first'

Why exa(1), you may ask, when there are BSD ls(1) and GNU ls(1) (installed as gls(1) on FreeBSD to avoid confusion)? To add GNU ls(1) to a FreeBSD system install the sysutils/coreutils package.

Well, the BSD ls(1) has two major cons:

  • It is not able to sort directories first.
  • It selects the width of ALL columns based on the single longest file name.

[Screenshot: BSD-ls.png]

The BSD ls(1) was used with the following alias:

alias ls='ls -p -G -D "%Y.%m.%d %H:%M"'

The GNU ls(1) does not have these two problems, but it colors the output using only a very limited set of patterns:

  • Non-executable file.
  • Executable file.
  • Directory.
  • Link.
  • Device.

[Screenshot: GNU-ls.png]

The GNU ls(1) was used with the following alias:

gls -p -G --color --time-style=long-iso --group-directories-first --quoting-style=literal

Here is where exa(1) comes in handy, as it does not have any of the cons of FreeBSD's ls(1) and it colors a lot more types of files.

[Screenshot: e.png]

exa --time-style=long-iso --group-directories-first

It is still very simple coloring, based on the file extension and not on magic numbers, as a plain (empty) text file named SOME-NOT-FILE.pdf is colored like a PDF document.

[Screenshot: e-pdf.png]

But even this 'limited' coloring helps in 99% of cases, and while with BSD ls(1) and GNU ls(1) all of these files 'seem' like plain text files, with exa(1) it is obvious from the start which are plain files, which are images and which are 'documents' like PDF files for example.

Where Is My Space

On all UNIX and Linux systems there exists the du(1) command. Combined with sort(1) it is a universal way of searching for space eaters. Here is an example for the / root directory with the -g flag to display units in gigabytes.

# cd /
# du -sg * | sort -n
1       bin
1       boot
1       compat
1       COPYRIGHT
1       data
1       dev
1       entropy
1       etc
1       lib
1       libexec
1       media
1       mnt
1       net
1       proc
1       rescue
1       root
1       sbin
1       sys
1       tmp
1       var
2       jail
8       usr
305     home

Contents of the UNIX System Resources directory with the -m flag to display units in megabytes.

# cd /usr
# du -sm * | sort -n
1       libdata
1       obj
1       tests
3       libexec
11      sbin
13      include
45      lib32
56      lib
58      share
105     bin
1080    ports
1343    src
5274    local

But it is a PITA to type cd and du all the time, not to mention that some oldschool UNIX systems do not provide the -g or -m flags, so on HP-UX you are limited to kilobytes at most.

You may also try the du(1) -h (human readable) flag combined with the sort -h (sort human readable) variant.

# du -smh * | sort -h
512B    data
512B    net
512B    proc
512B    sys
4.5K    COPYRIGHT
4.5K    entropy
5.5K    dev
6.5K    mnt
 53K    media
143K    tmp
205K    libexec
924K    bin
2.2M    etc
3.9M    root
4.6M    sbin
6.2M    rescue
6.6M    lib
 90M    boot
117M    compat
564M    jail
667M    var
5.4G    usr
297G    home

This is where ncdu(1) comes in handy. It is an ncurses based disk usage analyzer which helps find space eaters very fast, without typing the same commands over and over again. Here is ncdu(1) in action.

First it calculates the sizes of the files.

[Screenshot: ncdu.png]

After a while you get the output sorted by size.

[Screenshot: ncdu-usr.png]

If you hit [ENTER] on the directory you will be instantly moved into that directory.

[Screenshot: ncdu-usr-local.png]

If you delete something with 'd' then remember to recalculate the output with the 'r' key.

It also has great options such as spawning a shell in the current directory with 'b', or toggling between apparent size and disk usage with the 'a' option. The latter is very useful when you use a filesystem with builtin compression like ZFS.

       up, k  Move cursor up
     down, j  Move cursor down
 right/enter  Open selected directory
  left, <, h  Open parent directory
           n  Sort by name (ascending/descending)
           s  Sort by size (ascending/descending)
           C  Sort by items (ascending/descending)
           d  Delete selected file or directory
           t  Toggle dirs before files when sorting
           g  Show percentage and/or graph
           a  Toggle between apparent size and disk usage
           c  Toggle display of child item counts
           e  Show/hide hidden or excluded files
           i  Show information about selected item
           r  Recalculate the current directory
           b  Spawn shell in current directory
           q  Quit ncdu

Here is a comparison of disk usage and apparent size using the du(1) command.

Disk usage first.

% du -sm books
39145   books

Apparent size.

% du -smA books
44438   books

So I have a 1.13 compression ratio on the ZFS filesystem. More than 5 GB saved just in that directory 🙂
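
For the record – that ratio can be computed on the spot with bc(1).

% echo "scale=2; 44438 / 39145" | bc
1.13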

Where Are My Files

Once I got some space back I also wanted to know if there were some directories with an enormous amount of very small files.

First I came up with my own files-count.sh script solution, which is not that long.

#! /bin/sh

export LC_ALL=C

if [ ${#} -eq 0 ]
then
  DIR=.
else
  DIR="${1}"
fi

# enter the target directory so the ./ prefix cut below is always correct
cd "${DIR}" || exit 1

# count files recursively in each first level subdirectory
find . -type d -maxdepth 1 -mindepth 1 \
  | cut -c 3- \
  | while read I
    do
      find "${I}" | wc -l | tr -d '\n'
      echo " ${I}"
    done | sort -n

It works reliably, but same as with the du | sort tandem you have to retype it (or at least use cd(1) and hit the [UP] arrow again) … but then I discovered that ncdu(1) also counts files! It does not provide a 'startup' argument to start in this count files mode, but when you hit the 'c' key it will instantly display the count of files in each scanned directory. To sort this output by the count of files hit the 'C' key (capital 'C').

[Screenshot: ncdu-files.png]

The files-count.sh script still has one advantage over ncdu(1) – the latter stops counting files at 100k, which is shown on the screenshot, so if you need to search for a really big amount of files, or more than about 100k of them, then the files-count.sh script will be more accurate/adequate.

% cd /usr
% files-count.sh 
       1 obj
      36 libdata
     299 sbin
     312 libexec
     390 tests
     498 bin
     723 lib32
     855 lib
    2127 include
   16936 share
  159945 src
  211854 ports
  266021 local

… but what if there are some very big files hidden somewhere deep in the directory tree? The du(1) or ncdu(1) tools will not help here. As usual I thought about a short files-big.sh script that will do the job.

#! /bin/sh

export LC_ALL=C

if [ ${#} -eq 0 ]
then
  DIR=.
else
  DIR="${1}"
fi

find "${DIR}" -type f -exec stat -f "%16z; doas rm -f \"%N\"" {} ';' | sort -n

An example usage on the /var directory.

# cd /var
# files-big.sh | tail
        10547304; doas rm -f "./tmp/kdecache-vermaden/icon-cache.kcache"
        29089823; doas rm -f "./db/clamav/clamav-2671b72fce703c2133c61e5bf85aad19.tmp/clamav-373e311ca7f610a39c7cf5c5c5a4fd83.tmp/daily.hdb"
        30138884; doas rm -f "./tmp/pkg-provides-wyK2"
        48271360; doas rm -f "./db/pkg/repo-HardenedBSD.sqlite"
        54816768; doas rm -f "./db/pkg/repo-FreeBSD.sqlite"
        66433024; doas rm -f "./db/pkg/local.sqlite"
        82313216; doas rm -f "./db/clamav/clamav-2671b72fce703c2133c61e5bf85aad19.tmp/clamav-373e311ca7f610a39c7cf5c5c5a4fd83.tmp/daily.hsb"
       117892267; doas rm -f "./db/clamav/main.cvd"
       132431872; doas rm -f "./db/clamav/daily.cld"
       614839082; doas rm -f "./db/pkg/provides/provides.db"

The output is in an 'executable' format, so if you select a whole line and paste it into the terminal, that file will be deleted. By default it uses doas(1) but nothing stops you from putting sudo(8) there. Not sure if you will find it useful but it helped me at least a dozen times.

How Many Copies Do You Keep

I often find myself keeping the same files in several places, which also wastes space (unless you use ZFS deduplication of course).

The dedup.sh script I once made is a little larger, so I will not paste it here and will just put a link to it.

It has the following options available. You may search/compare files by name or size (fast) or by their MD5 checksum (slow).

% dedup.sh
usage: dedup.sh OPTION DIRECTORY
  OPTIONS: -n   check by name (fast)
           -s   check by size (medium)
           -m   check by md5  (slow)
           -N   same as '-n' but with delete instructions printed
           -S   same as '-s' but with delete instructions printed
           -M   same as '-m' but with delete instructions printed
  EXAMPLE: dedup.sh -s /mnt

Simple usage example.

% cd misc/man
% cp zfs-notes zfs-todo
% dedup.sh -M .
count: 2 | md5: 4ff4be66ab7e5484de2bf7c168ff995a
  doas rm -rf "./zfs-notes"
  doas rm -rf "./zfs-todo"

count: 2 | md5: 6d87f5b1317ea189165fcdc71380735c
  doas rm -rf "./x11"
  doas rm -rf "./xinit"

By copying the zfs-notes file into the zfs-todo file I wanted to show you what dedup.sh will print on the screen, but accidentally I also found another duplicate πŸ™‚

The output of dedup.sh is simple, and like with the files-big.sh script, selecting a whole line and pasting it into the terminal will remove the duplicate. By default it uses doas(1) but you can change it to sudo(8) if that works better for you.
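
If you only need the idea and not the whole script, a minimal sketch of the logic behind the -m (MD5) mode may look like the one below – a sketch only, not the actual dedup.sh code. It relies on the FreeBSD md5 -r output format (checksum first) – on Linux use md5sum(1) instead.

#! /bin/sh
# PRINT EVERY FILE WHOSE MD5 CHECKSUM WAS ALREADY SEEN (SKETCH)
find "${1:-.}" -type f -exec md5 -r {} + 2> /dev/null \
  | sort \
  | awk 'seen[$1]++'

For each group of identical files it prints all but the first one, so every line it outputs points at a removable duplicate.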

Unusual cron(1) Intervals

Most of us already remember what the five fields of the crontab(5) file mean, but what if you would like to run a command every second … or only after a reboot? The answer lies in the man 5 crontab page. Here are these exotic options.

string          meaning
------          -------
@reboot         Run once, at startup of cron.
@yearly         Run once a year, "0 0 1 1 *".
@annually       (same as @yearly)
@monthly        Run once a month, "0 0 1 * *".
@weekly         Run once a week, "0 0 * * 0".
@daily          Run once a day, "0 0 * * *".
@midnight       (same as @daily)
@hourly         Run once an hour, "0 * * * *".
@every_minute   Run once a minute, "*/1 * * * *".
@every_second   Run once a second.
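
For example, to run some personal script once after each boot (the script path below is just an example) the crontab(5) entry would look like this.

@reboot ~/scripts/after-boot.sh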

Check cron(1) Environment

Many times I have lost a lot of time debugging what went wrong when my script was run from the crontab(5) file. Often it was some missing variable, or some command or script I used was not in the PATH variable.

To make that debugging faster you can use an ENV.sh script to just store the cron(1) environment.

% cat ENV.sh
env > /tmp/ENV.out

The ENV.sh script will write the current environment to the /tmp/ENV.out file.

Let's put it into the crontab(5) for a test.

% crontab -l | grep ENV
@every_second ~/ENV.sh

Now after at most a second you can check for the contents of the /tmp/ENV.out file.

% cat /tmp/ENV.out
LOGNAME=vermaden
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
PWD=/home/vermaden
HOME=/home/vermaden
USER=vermaden
SHELL=/bin/sh

Now you can easily debug the scripts run by crontab(5) … at least on the environment part 🙂

Simple HTTP Server

I have found myself many times in a situation where I wanted to allow downloading some files from my machine and SSH could not be used.

This is when python(1) comes in handy. It has the SimpleHTTPServer module (or http.server in the Python 3 version) so you can instantly start an HTTP server in any directory!

Here are the commands for both Python versions.

  • Python 2.x – python -m SimpleHTTPServer PORT
  • Python 3.x – python -m http.server PORT

I even made a simple http.sh wrapper script to make it even easier.

#! /bin/sh

if [ ${#} -ne 1 ]
then
  echo "usage: ${0##*/} PORT"
  exit 1
fi

python -m SimpleHTTPServer ${1}

Example usage.

% cd misc/man
% http.sh 8080
Serving HTTP on 0.0.0.0 port 8080 ...
127.0.0.1 - - [14/Sep/2018 23:06:50] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Sep/2018 23:06:50] code 404, message File not found
127.0.0.1 - - [14/Sep/2018 23:06:50] "GET /favicon.ico HTTP/1.1" 404 -
127.0.0.1 - - [14/Sep/2018 23:09:15] "GET /bhyve HTTP/1.1" 200 -

To stop it simply hit [CTRL]-[C] interrupt sequence.

Here is how it looks in the Epiphany browser.

[Screenshot: http.png]

Simple FTP Server

Similarly with the FTP service – another Python goodie called pyftpdlib (Python FTP Server Library) provides that.

My ftp.py wrapper is a little bigger, as you can write quite complicated setups with pyftpdlib, but mine is simple: it starts in the current directory and adds a read-only anonymous user and a read/write user named writer with the WRITER password.

#! /usr/bin/env python

from sys                   import argv,exit
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers    import FTPHandler
from pyftpdlib.servers     import FTPServer

if len(argv) != 2:
  print "usage:", argv[0], "PORT"
  print
  exit(1)
  
authorizer = DummyAuthorizer()
authorizer.add_user("writer", "WRITER", ".", perm="elradfmw")
authorizer.add_anonymous(".")
handler = FTPHandler
handler.authorizer = authorizer
handler.passive_ports = range(60000, 60001)
address = ("0.0.0.0", argv[1])
ftpd = FTPServer(address, handler)
ftpd.serve_forever()

The ftp.py is handy if you want to enable someone to upload something for you (or you are doing it on the other machine) when SSH/SCP is not possible for some reason.

To stop it simply hit [CTRL]-[C] interrupt sequence.

Here is its terminal startup and logs.

% cd misc/man
% ftp.py 2121
[I 2018-09-14 23:21:53] }}} starting FTP server on 0.0.0.0:2121, pid=64399 {{{
[I 2018-09-14 23:21:53] concurrency model: async
[I 2018-09-14 23:21:53] masquerade (NAT) address: None
[I 2018-09-14 23:21:53] passive ports: 60000->60000

… and how Firefox renders its contents.

[Screenshot: ftp.png]

Hope you will find some of these useful, see you at Part 4 some day.

EOF

Ghost in the Shell – Part 2

The first article in the Ghost in the Shell series was also the first post on this blog, and while I was busy writing various server related articles – and recently the FreeBSD Desktop series – it is about time for Part 2 of the Ghost in the Shell series.

You may want to check other articles in the Ghost in the Shell series on the Ghost in the Shell – Global Page, where you will find links to all episodes of the series along with a table of contents for each episode.

Let's start with something simple – yet powerful and time saving.

Alias with Arguments

One may of course write a function to do a similar job, but keeping track of and 'maintaining' all those functions becomes complicated and one has to organize oneself. This partially applies to aliases as well, but they are smaller and easier to maintain than whole functions. In any modern shell an alias(1) can also take arguments; while you will not be able to parse them as appropriately as with functions, they do the job for basic use.

Here is an example of such alias(1) with arguments.

% ls
gfx/ info/ misc/ scripts/ tmp/

% alias lsg='ls | grep'

% lsg gfx
gfx/

Color grep(1) Patterns

As we have already 'touched' the grep(1) command topic, let's make it more usable by highlighting the found results in color. The ${GREP_COLOR} variable is used for that purpose and it expects a number for a color; here is the table with the number-color mapping.

Color    Number
Black    30
Red      31
Green    32
Yellow   33
Blue     34
Magenta  35
Cyan     36
White    37

You may as well use 'bold' output by adding '1;' before the number, for example:

% echo ${GREP_COLOR}
1;31

You will also have to make an alias(1) for grep(1) with the --color argument, like this:

% alias grep='grep --color'

Here is how it looks in practice.

% export GREP_COLOR=31
% alias grep='grep --color'
% dmesg | grep SMP
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
SMP: AP CPU #1 Launched!

Here is how it looks on the xterm(1) terminal.

[Screenshot: ghost-terminal]
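
For the record – newer GNU grep(1) versions use the GREP_COLORS (plural) variable instead, where the color of the matching text is set with the ms= key. The line below should be the equivalent of the setup above.

% export GREP_COLORS='ms=1;31'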

Process Management

This one is very useful on any UNIX system, no matter if it is a server or a desktop.

These are commands and operands that will help us manage processes started by hand:

  • &
  • fg
  • bg
  • jobs
  • kill
  • disown
  • nohup
  • [CTRL]+[Z]
  • [CTRL]+[C]

As you probably already know, to start a command 'in the background' – which means do what I tell you but do not block the terminal – you have to add '&' (ampersand) at the end of that command. Such a command does not magically go away, and as long as it is running it is visible in the jobs(1) command output. You may use the '-l' switch to also show the PID of the background processes.

% galculator &
[1] 8449

% jobs
[1]  + running    galculator

% jobs -l
[1]  + 8449 running    galculator

Now, what if you forget to add '&' (ampersand) at the end of the command but you wanted to put it into the background? Hit the [CTRL]+[Z] shortcut (Control key with the 'small' Z letter) and the process will be put into the suspended state. Now you have several options: you can put that process into the background with the bg(1) command – by default it uses the last suspended job – %1; you can also bring it back into the foreground, blocking the terminal, with the fg(1) command. You can also list its state with jobs(1) and of course kill(1) it, either with the PID shown by the jobs -l command or by specifying the job number – %1 in this case.

Here is an example.

% galculator
^Z
zsh: suspended  galculator

% jobs
[1]  + suspended  galculator

% bg
[1]  + continued  galculator

% jobs -l
[1]  + 72892 running    galculator

% kill %1
[1]  + terminated  galculator

%

While fg(1) and bg(1) allow you to put a command into the foreground or background respectively when the process is in the suspended state, one may ask how to 'switch' a process to the suspended state while it is already running in the background. It is done with the kill -17 signal called SIGSTOP (signal numbers here are from FreeBSD – check kill -l on your system as they may differ). You can also bring such a suspended process back to the running state with the kill -19 signal called SIGCONT … or just use the fg(1) or bg(1) command again. Another difference between the fg(1)/bg(1) commands and the more 'direct' kill -17/kill -19 commands is that kill(1) does not inform the user what has happened to the process. You may as well use the kill -SIGCONT syntax, or kill -s SIGCONT, if that is more readable for you.

% galculator
^Z
zsh: suspended  galculator

% bg
[1]  + continued  galculator

% xcalc
^Z
zsh: suspended  xcalc

% jobs -l
[1]  - 19537 running    galculator
[2]  + 20563 suspended  xcalc

% kill -17 %1
[1]  + suspended (signal)  galculator

% jobs -l
[1]  + 19537 suspended (signal)  galculator
[2]  - 20563 suspended  xcalc

% kill -SIGCONT %1
% bg %2
[2]  - continued  xcalc

% jobs -l
[1]  + 19537 running    galculator
[2]  - 20563 running    xcalc

Also check man kill and man signal for more information.

What about disown(1) then? It is a 'magic' helper for when you start some long running jobs directly in the terminal, without Screen or Tmux, and you need to disconnect that terminal, for example because you are taking your laptop with you. When you do this – depending on the settings of the current shell – the processes in the background may be killed or 'moved' to PID 1 (the init(1) of course) as the PPID (Parent PID). To achieve the latter we will use the disown(1) command. Once you 'disown' a process it will no longer be shown by the jobs(1) command, but it will keep running, 'pinned' to the init(1) process, after you disconnect the terminal session.

% galculator
^Z
zsh: suspended  galculator

% bg
[1]  + continued  galculator

% jobs -l
[1]  + 98556 running    galculator

% disown %1

% jobs -l

% pgrep galculator
98556

% pstree -p 98556
─┬◆ 00001 root /sbin/init --
 └─┬─ 48708 vermaden xterm
   └─┬◆ 52463 vermaden -zsh (zsh)
     └──◆ 98556 vermaden galculator

Now it is still pinned to the shell in the xterm(1) terminal. After we close the xterm(1) window (or kill that zsh(1) shell) it will switch to init(1) as the PPID (Parent PID).

% pstree -p 98556
─┬◆ 00001 root /sbin/init --
 └──◆ 98556 vermaden galculator

% pgrep -P 1 galculator
98556

We are left with nohup(1) then – when and why use it if we already have the great disown(1) magic? Well, disown(1) is not always available, so when you need to put some command into a long background run and disconnect afterwards, nohup(1) is the best possible option. By default the nohup(1) command will log the output of the started command into the nohup.out file. Remember that nohup(1) will still run the process in the foreground; to put it into the background use '&' (ampersand) or the [CTRL]+[Z] with bg(1) combo.

% nohup galculator
appending output to nohup.out
^Z
zsh: suspended  nohup galculator

% bg
[1]  + continued  nohup galculator

% jobs -l
[1]  + 22322 running    nohup galculator

% pstree -p 22322
─┬◆ 00001 root /sbin/init --
 └─┬─ 89568 vermaden xterm
   └─┬◆ 91486 vermaden -zsh (zsh)
     └──◆ 22322 vermaden galculator

… and after disconnecting, our process switched to init(1) as the PPID.

% pstree -p 22322
─┬◆ 00001 root /sbin/init --
 └──◆ 22322 vermaden galculator

You may of course end a running foreground process with the [CTRL]+[C] shortcut, but that is probably already known to you. I just mention it for the 'completeness' of the guide.

% galculator
^C

%

Which Which

While the which(1) command shows the full path of the executable found in the first matching directory of the ${PATH} variable, it also shows what alias is used for that command, if there is one. One may ask how to find the absolute executable path if it shows an alias(1) instead. Well, you have to use unalias(1) on that command, so which(1) will show the full path again.

% which caja
caja: aliased to caja --browser --no-desktop

% unalias caja

% which caja
/usr/local/bin/caja

Also be sure to check Smylers' comment below about the difference between the shell builtin which and the /bin/which command.
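
In ZSH you can also get the full path directly, without removing the alias – the whence builtin with the -p flag performs a path search only, ignoring aliases and functions.

% whence -p caja
/usr/local/bin/caja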

Record Session

If you have used PuTTY or MobaXterm in your work, then you appreciate the possibility of saving the terminal output to a file, for example for documentation purposes. This is also available 'natively' in the shell by using the script(1) command. Remember that script(1) will also record 'special' characters like colors, so to properly 'replay' the session you may want to use the script(1) or cat(1) commands for that, or use less(1) with the -R argument.

Here is an example recorded script(1) session.

% script script.out
Script started, output file is script.out

% ls
gfx info misc scripts tmp unix.png

% uname -spr
FreeBSD 11.2-RELEASE amd64

% exit
Script done, output file is script.out

% cat script.out
Script started on Sun Jul  8 08:24:06 2018
You have mail.
% ls | grep gfx
gfx
% uname -spr
FreeBSD 11.2-RELEASE amd64
% exit
exit

Script done on Sun Jul  8 08:24:20 2018

% less -R script.out
Script started on Sun Jul  8 08:24:06 2018
You have mail.
% ls | grep gfx
gfx
% uname -spr
FreeBSD 11.2-RELEASE amd64
% exit
exit

Script done on Sun Jul  8 08:24:20 2018

% less script.out
Script started on Sun Jul  8 08:24:06 2018
You have mail.
% ls | grep gfx
ESC[1;31mgfxESC[00mESC[K
% uname -spr
FreeBSD 11.2-RELEASE amd64
% exit
exit

Script done on Sun Jul  8 08:24:20 2018


Edit Command Before Executing

Sometimes you have a long multi-line command to execute, so often it is crafted in your favorite ${EDITOR} and then pasted into the terminal. To omit the copying and pasting you may want to check the fc(1) command, which serves a similar purpose. After you type a command – for example a simple ls(1) command – and then type fc(1), the fc(1) command will take that ls(1) command into your favorite text editor from the ${EDITOR} variable, allow you to edit it, and if you save and exit that editor, it will execute it.

Let's see how it behaves by example.

% ls
gfx   books   download   scripts

% fc

Now you are taken into the ${EDITOR}, which is vi(1) in my case.

      1 ls
~
~
~
/tmp/zsh999EQ6: unmodified: line 1

Let's make some changes.

      1 ls -l \
      2    -h
~
~
~
~

:wq

After you hit [ENTER] it will exit the ${EDITOR} and execute that command.

total 6181
drwxr-xr-x    87 vermaden  vermaden    87B 2017.12.18 15:30 books/
drwxr-xr-x    12 vermaden  vermaden    12B 2018.06.19 16:02 download/
drwxr-xr-x    19 vermaden  vermaden    20B 2018.05.24 11:52 gfx/
drwx------    12 vermaden  vermaden   310B 2018.07.07 03:23 scripts/

You may show that command by pressing the [UP] key to check what has been executed.

% ls -l -h

Edit or Just View

When working in a multi-admin environment – especially while debugging – one admin may block another admin's work by using vi(1) – or just their favorite editor – to 'browse' the file contents. Good practice in that case is using more(1) or less(1) instead of vi(1), but it frustrates some admins to have to type vi(1) again if they need to change something.

… and by the way, on FreeBSD more(1) is less(1) πŸ™‚

% uname -spr
FreeBSD 11.2-RELEASE amd64

% ls -i `which less` `which more`
492318 /usr/bin/less  492318 /usr/bin/more

A 'blocked' example is shown below, when the second admin wanted to browse the /etc/rc.conf file while the first one was already editing it.

# vim /etc/rc.conf

E325: ATTENTION
Found a swap file by the name "/etc/.rc.conf.swp"
          owned by: root   dated: Sun Jul  8 08:38:35 2018
         file name: /etc/rc.conf
          modified: no
         user name: root   host name: t420s.local
        process ID: 54219 (still running)
While opening file "/etc/rc.conf"
             dated: Fri Jul  6 00:51:11 2018

(1) Another program may be editing the same file.  If this is the case,
    be careful not to end up with two different instances of the same
    file when making changes.  Quit, or continue with caution.
(2) An edit session for this file crashed.
    If this is the case, use ":recover" or "vim -r /etc/rc.conf"
    to recover the changes (see ":help recovery").
    If you did this already, delete the swap file "/etc/.rc.conf.swp"
    to avoid this message.

Swap file "/etc/.rc.conf.swp" already exists!
[O]pen Read-Only, (E)dit anyway, (R)ecover, (Q)uit, (A)bort:

This is where less(1) comes in handy, because if you open a file in it, you do not 'block' access to it, and if you need to edit something you just hit the [v] key (small 'v' letter). It will open that file in your ${EDITOR} editor and you can make any changes now.

Reset

Last but not least – often when you paste 'too much' into the terminal it becomes 'fragile' or 'broken'. To reset it to a 'stable' and 'proper' state just use the reset(1) command.

% reset

Hope you find it useful, see you in Part 3 sometime 😉

EOF

Ghost in the Shell – Part 1

I wanted to post this earlier, but the busy daily life does not help πŸ˜‰

This will be the first article in the series about efficient work in the shell environment. There are actually a lot of articles and blog posts about efficient work in the terminal, but a lot of them are biased towards very specific uses, like hints only for the Bash shell or only for a specific terminal emulator. For example Moving efficiently in the CLI.

This series is about universal knowledge that works in most shells and environments. Let's start with a hint that I use many times a day and that saves a lot of time by not having to type …

You may want to check other articles in the Ghost in the Shell series on the Ghost in the Shell – Global Page, where you will find links to all episodes of the series along with a table of contents for each episode.

Recall Last Argument of Previous Command

Imagine the most simple scenario: creating a directory and entering it. Typically it goes like this:

% mkdir clear-place-for-new-work
% cd clear-place-for-new-work
%

The longer the name, the bigger the chance that you would type mkdir, then hit the [UP] arrow, then the [HOME] or [CTRL]+[A] keys and then put cd in the place of mkdir.

With the use of !$ you can recall the last argument of the previous command, so it will now look like this.

% mkdir clear-place-for-new-work
% cd !$
cd clear-place-for-new-work
%

Faster, isn't it?

Swap First Occurrence of a Word

The upper example can be used for the next advice as well. By typing ^fromwhat^towhat in the terminal you will swap the first occurrence of the word fromwhat to the word towhat in the previous command. Let's see how it works.

% mkdir clear-place-for-new-work
% ^mkdir^cd
cd clear-place-for-new-work
%

It still takes more time to write than using !$, so it is useful mostly when there are short things to swap, like numbers – for example ^3^4 to 'move' from one target to another … or when what you need to change is not the last argument of the previous command.

There and Back Again

A lot of people do not know that you can go back to the previous working directory with a dash. Let's assume that you need to get to the /tmp directory for one command and then get back to where you were to continue the work. Here is an example.

% pwd
/usr/local/etc/bareos/bareos-dir.d/jobdefs
% cd /tmp
% pwd
/tmp
% (do needed work in /tmp dir)
% cd -
/usr/local/etc/bareos/bareos-dir.d/jobdefs
% pwd
/usr/local/etc/bareos/bareos-dir.d/jobdefs

You can even create an entire directory stack with the pushd/popd commands if needed – check the Wikipedia article on that for more information. You can also use the ${OLDPWD} variable, which is useful with the umount command for example.

% pwd
/media/backup-pendrive-key
% cd ~
% umount $OLDPWD
% pwd
/home/vermaden
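
For the record – a quick pushd/popd session is shown below; the exact format of the printed directory stack differs a little between shells.

% pwd
/home/vermaden
% pushd /tmp
/tmp ~
% pwd
/tmp
% popd
~
% pwd
/home/vermaden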

Repeat Command from History

With the exclamation mark (!) you can re-invoke a command from history with all its arguments (which can sometimes be risky). For example.

% !pkg
pkg update -f
(runs actual command)
%

It is better to first check what arguments were used in that command – that is where :p comes in handy. Here is an example of its usage.

% !pkg:p
pkg update -f
(just prints command without running it)
% !pkg
pkg update -f
(runs actual command)
%

Now, as the arguments are known, it is safe to re-invoke the command. When can this be dangerous? Can the ls command be dangerous? That depends on what you have in your history – check the example below.

% ls | while read I; do rm -f ${I}; done

This command first lists the contents of the current working directory with the ls command, then the output is piped to the while loop which invokes the rm -f command for each item listed by ls, which effectively removes all non-hidden files in the current working directory … which is probably not what we meant by typing !ls at the command prompt ;). That is why it is valuable to first check what arguments were used, with the !ls:p syntax.

Enough for now. I will write more parts with more hints on how to efficiently work in the shell/terminal environment.

UPDATE 1

The Ghost in the Shell – Part 1 article was included in the BSD Now 241 – Bowling in the LimeLight episode.

Thanks for mentioning!

UPDATE 2

About the Recall Last Argument of Previous Command section … there is also $_ which does a similar thing as !$, but there is a little difference. The !$ is 'line oriented' while $_ is 'previous command oriented'. Below is an example that shows the difference in behavior.

The !$ takes its value from the last command in the 'previous line', which means that the '-l' value from line 001 will be used, and not 'asd' from the previously executed command on the current line 002.

001 % ls -l
002 % echo asd; ls !$ | tail -2
echo asd; ls -l | tail -2
asd
// ls output //

The $_ takes its value from the last executed command, thus it points at 'asd' used on line 002 and not at '-l' used on the previous line 001.

001 % ls -l
002 % echo asd; ls $_ | tail -2
asd
ls: asd: No such file or directory

In the Bash shell there is also the [ALT]-[.] shortcut that cycles through the !$ values from previous lines. To achieve the same shortcut in ZSH put the line below in your ZSH config.

bindkey '\e.' insert-last-word

Thank you Zachery Purnell for pointing that out.

EOF