
I'm trying to write a script for work to automate some reporting on an output. The log files are stored (currently; the layout is being standardised in the future) in this sort of path structure:

/<root_path>/<process_one_path>/logs/<time_date_stamp>/<specific_log_file>

/<root_path>/<process_two_path>/logs/<different_time_date_stamp>/<specific_log_file>

Every part of the path is known except the time date stamps, which are always the latest in the folder.

If I try to use a wild card in place of the time date stamp, I get multiple results, e.g.:

> ls /<root_path>/<process_two_path>/logs/* [tab]
20130102-175103
20130118-090859
20130305-213506

I only want it to return the latest one, is this possible with Bash?

NB (I don't have zsh, and as lovely as it sounds I doubt we'll ever get it at work)

AncientSwordRage

5 Answers


The following works in bash 4.2:

list=( /<root_path>/<process_two_path>/logs/* )
echo "${list[-1]}"

If your bash is an older version:

list=( /<root_path>/<process_two_path>/logs/* )
echo "${list[${#list[@]}-1]}"
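As a runnable sketch (the temporary directory and timestamp names are made up for the demo), either form picks the lexically last entry:

```shell
# Demo with made-up timestamp directories in a temp dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/logs/20130102-175103" \
         "$tmp/logs/20130118-090859" \
         "$tmp/logs/20130305-213506"

list=( "$tmp"/logs/* )                 # glob expands in sorted order
latest=${list[${#list[@]}-1]}          # last element; works on older bash too
echo "${latest##*/}"                   # prints: 20130305-213506

rm -rf "$tmp"
```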
Stéphane Chazelas
phemmer

POSIXly:

 set -- /<root_path>/<process_two_path>/logs/*
 shift "$(($# - 1))"
 printf '%s\n' "$1"

Since you mention zsh:

 print -r /<root_path>/<process_two_path>/logs/*([-1])

If there's no non-hidden file in /<root_path>/<process_two_path>/logs, the POSIX one will output a literal /<root_path>/<process_two_path>/logs/* while zsh will return an error and abort the shell process as if it were a syntax error¹. You can handle the error in an always block, or use the N (for nullglob) glob qualifier and handle the empty case yourself.


¹ If the glob is in an argument to an external command (which print isn't as it's a zsh builtin), that only exits the process that would have run the command so it just appears as cancelling the command. In interactive shells, it returns to the prompt rather than exiting the shell, same as for other syntax errors.
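A POSIX sketch of that empty-directory handling (the directory here is just a stand-in for the real logs directory):

```shell
# Stand-in for the real logs directory: deliberately left empty.
dir=$(mktemp -d)

set -- "$dir"/*
if [ -e "$1" ]; then
    shift "$(($# - 1))"      # keep only the last (latest) entry
    latest=$1
else
    latest=''                # the glob matched nothing
fi
echo "${latest:-no logs found}"

rmdir "$dir"
```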

Stéphane Chazelas

You can (mis)use a for loop. Like this:

for filename in <logdir>/*; do :; done; echo "$filename"

This works because the loop variable filename still holds the last value after the loop has terminated, and because bash expands globs in alphabetical order. Read the bash manual for more information. Note that the sort order depends on LC_COLLATE.

This might not be the most efficient approach, because bash still has to loop over all the files just to do nothing. But for a reasonable number of files it should be fast enough. If it is no longer fast enough, then the number of files has become unreasonable and you should consider a different approach.

If you want the filename with the oldest timestamp (the one sorted first) you can do it like this:

for filename in <logdir>/*; do echo "$filename"; break; done

or this

for filename in <logdir>/*; do break; done; echo "$filename"

which is effectively the same (the first filename is printed) but depending on your code and style preferences you might prefer one or the other.
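Both variants can be seen side by side in a small demo (the timestamp names are invented for illustration):

```shell
d=$(mktemp -d)
mkdir "$d/20130102-175103" "$d/20130118-090859" "$d/20130305-213506"

for filename in "$d"/*; do break; done   # stop at the first match
oldest=$filename

for filename in "$d"/*; do :; done       # fall through all matches
newest=$filename

echo "${oldest##*/} -> ${newest##*/}"    # oldest first, newest last

rm -rf "$d"
```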

Lesmana

ls prints one entry per line (as with -1) by default when its output goes to a pipe, and its output is already sorted alphabetically:

ls | tail -n1

If others are looking for a way to insert the first or last result interactively, you can bind menu-complete or menu-complete-backward in .inputrc:

"\e\t": menu-complete
"\e[Z": menu-complete-backward # shift-tab

If show-all-if-ambiguous is enabled, set completion-query-items 0 removes the prompt when there are 101 or more results and set page-completions off disables the pager.

Lri
    Parsing `ls` is generally a bad idea -- filenames can contain all sorts of weird characters, including anything `ls` is able to use as a separator. – Gavin S. Yancey Sep 11 '20 at 21:55
ls -tr | tail -1

Should do the trick.
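If "latest" must mean newest by modification time rather than by name, one way to avoid parsing ls at all (as a comment above warns against) is bash's -nt test. A sketch with made-up directory names:

```shell
# Made-up directories; 'newer' gets a later mtime than 'older'.
d=$(mktemp -d)
mkdir "$d/older"
sleep 1
mkdir "$d/newer"

newest=
for f in "$d"/*; do
    # -nt is true if $f is newer than $newest, or if $newest does not exist
    [ "$f" -nt "$newest" ] && newest=$f
done
echo "${newest##*/}"                     # prints: newer

rm -rf "$d"
```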

rahmu
slipset