
I have a text file as shown below:

A f1
B f2
A f3
B f4
B f5

I want to sort it based on first column values and keep it in separate files.

Desired output:

A.txt:

A f1 
A f3

B.txt:

B f2
B f4
B f5

I tried it with `uniq` but it isn't working.

Edit: I wasn't able to post the text file and the output files formatted the way they should be; each entry should appear on its own line, as shown above.

terdon
Jyoti Pal
  • `uniq` only recognises **adjacent** duplicates. `sort -u` may be needed to recognise duplicates, for further processing... – Jeremy Boden Jul 05 '21 at 20:25
  • Are there any duplicates in the input that you want to eliminate with `uniq`? If yes, please [edit] your question and extend your example input to show at least one duplicate (and, if necessary, adjust the expected output). In that case you could state that your question is not fully answered by the similar questions. – Bodo Jul 13 '21 at 17:40

3 Answers


In most cases, you can simply:

awk '{print > $1 ".txt"}' file
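For reference, a quick way to try this on the sample data (a sketch; parentheses around the redirection target avoid a parsing ambiguity in some awk implementations, though GNU awk accepts the unparenthesized form above):

```shell
# Recreate the sample input from the question.
printf '%s\n' 'A f1' 'B f2' 'A f3' 'B f4' 'B f5' > file

# Split each line into a file named after its first column.
awk '{print > ($1 ".txt")}' file

cat A.txt   # A f1 and A f3
cat B.txt   # B f2, B f4 and B f5
```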
JJoao

All you need is awk, not uniq.

awk '$1 == "A" { print }' "$file" > A.txt   # This will print all the A lines

awk '$1 == "B" { print }' "$file" > B.txt   # This will print all the B lines

Note: $file will be the name of your input log.
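Put together, a hypothetical end-to-end run of this two-pass approach (using `input.txt` as an example filename) looks like:

```shell
# Recreate the sample input under an assumed name.
printf '%s\n' 'A f1' 'B f2' 'A f3' 'B f4' 'B f5' > input.txt

file=input.txt

# One awk pass per key value.
awk '$1 == "A" { print }' "$file" > A.txt
awk '$1 == "B" { print }' "$file" > B.txt

wc -l A.txt B.txt   # A.txt has 2 lines, B.txt has 3
```

Note that this requires one pass over the input per distinct key, so it only scales if you know the keys in advance and there are few of them.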

EDIT: My original answer was based on the data provided at the time. I have since updated it, so my original "All you need is grep" no longer applies; it is awk instead.

sseLtaH

A short awk script will do this job:

sort file | awk '
  $1 != prev {prev = $1; if (out) close(out); out = $1 ".txt"}
  {print > out}
'

The close() call prevents "too many open files" errors when the input contains many distinct keys; because the input is sorted, each output file is written in one contiguous run and can be closed as soon as the key changes.
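A complete run on the question's sample data might look like this (a sketch; the filename `file` is assumed):

```shell
# Recreate the sample input.
printf '%s\n' 'A f1' 'B f2' 'A f3' 'B f4' 'B f5' > file

# Sort groups equal keys together, so each output file is
# written once and closed when the key changes.
sort file | awk '
  $1 != prev {prev = $1; if (out) close(out); out = $1 ".txt"}
  {print > out}
'

cat A.txt   # A f1 and A f3
cat B.txt   # B f2, B f4 and B f5
```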

glenn jackman