
Counting word occurrences from a file in bash

linux

Sorry for the noob question, but I'm still fairly new to bash programming (I started a few days ago). Basically, what I want to do is keep one file that records all the word occurrences from another file.

I know I can do something like this:

sort | uniq -c | sort
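
For a single file, the whole pipeline might look like this (a minimal sketch; page1.txt is just a placeholder name, and the tr calls are the same ones used in my script below):

tr -cs A-Za-z\' '\n' < page1.txt | tr A-Z a-z | sort | uniq -c | sort -rn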

The thing is that after that I want to take a second file, count the occurrences again and update the first file. After that I take a third file, and so on.
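
For example, merging a second file's counts into an existing "word: count" file could be done in one pass like this (just a sketch of the idea; counts.txt and page2.txt are placeholder names, and counts.txt must already exist, e.g. created with touch):

# print the old counts as "count word", append the new file's counts, then sum per word
{ awk -F": " '{ print $2, $1 }' counts.txt
  tr -cs A-Za-z\' '\n' < page2.txt | tr A-Z a-z | sort | uniq -c
} | awk '{ sum[$2] += $1 } END { for (w in sum) print w ": " sum[w] }' > counts.new \
  && mv counts.new counts.txt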

What I'm doing at the moment works without any problem (I'm using grep, sed and awk), but it looks pretty slow.

I'm pretty sure there is a very efficient way to do it with just a command or two, using uniq, but I can't figure it out.

Can you point me in the right direction?

I'm also pasting the code I wrote:

#!/bin/bash
#   counts the word occurrences in a file and writes them to another file       #
#   the words are listed from the most frequent to the least frequent           #

touch .check                # temporary file used to collect the words to count
touch distribution.txt      # final file with all the occurrences calculated

page=$1             # the file whose words I'm counting
occurrences=$2      # the file that accumulates the occurrences

# takes all the words from the file $page and orders them by occurrences
tr -cs A-Za-z\' '\n' < "$page" | tr A-Z a-z > .check

# loop to update the old file with the new information
# basically what I do is check word by word and add them to the old file as an update
cat .check | while read words
do
    word=${words}       # word I'm calculating
    strlen=${#word}     # word's length
    # I use a blacklist to skip banned words (for example very short or irrelevant words, like articles and prepositions)
    if ! grep -Fxq "$word" .blacklist && [ "$strlen" -gt 2 ]
    then
        # if the word was never found before it writes it with 1 occurrence
        if [ "$(grep -c -i "^$word: " "$occurrences")" -eq 0 ]
        then
            echo "$word: 1" >> "$occurrences"
        # else it calculates the occurrences
        else
            old=$(awk -v words="$word" -F": " '$1==words { print $2 }' "$occurrences")
            let "new=old+1"
            sed -i "s/^$word: $old$/$word: $new/g" "$occurrences"
        fi
    fi
done

rm .check

# finally it orders the words
awk -F": " '{print $2" "$1}' "$occurrences" | sort -rn | awk -F" " '{print $2": "$1}' > distribution.txt
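
A minimal way to run the script above (assuming it is saved as count.sh; the file names are only examples, and .blacklist plus the occurrences file need to exist before the first run):

touch .blacklist occurrences.txt
chmod +x count.sh
./count.sh page1.txt occurrences.txt
./count.sh page2.txt occurrences.txt    # running it again with a new page updates the same counts
cat distribution.txt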


1 Answer


Well, I'm not sure I've got the point of what you are trying to do, but I would do it this way:

while read file
do
  cat $file | tr -cs A-Za-z\' '\n'| tr A-Z a-z | sort | uniq -c > stat.$file
done < file-list
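
Each stat.<file> now contains uniq -c style lines, i.e. a count followed by the word, for example (the numbers here are only illustrative):

     42 the
     17 shell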

Now you have statistics for all your files, and you can simply aggregate them:

while read file
do
  cat stat.$file
done < file-list \
| sort -k2 \
| awk '{if ($2!=prev) {print s" "prev; s=0;}s+=$1;prev=$2;}END{print s" "prev;}'
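
The awk step is a bit dense, so here is the same aggregation written out with comments (same logic; the prev != "" checks just suppress the empty record the one-liner prints before the first word):

awk '
    $2 != prev {                 # a new word starts: flush the total for the previous one
        if (prev != "") print s " " prev
        s = 0
    }
    { s += $1; prev = $2 }       # accumulate counts for the current word
    END { if (prev != "") print s " " prev }    # flush the last word
'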

Example of usage:

$ for i in ls bash cp; do man $i > $i.txt ; done
$ cat <<EOF > file-list
> ls.txt
> bash.txt
> cp.txt
> EOF

$ while read file; do
> cat $file | tr -cs A-Za-z\' '\n'| tr A-Z a-z | sort | uniq -c > stat.$file
> done < file-list

$ while read file
> do
>   cat stat.$file
> done < file-list \
> | sort -k2 \
> | awk '{if ($2!=prev) {print s" "prev; s=0;}s+=$1;prev=$2;}END{print s" "prev;}' | sort -rn | head

3875 the
1671 is
1137 to
1118 a
1072 of
793 if
744 and
533 command
514 in
507 shell
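
If you also want the blacklist and the minimum word length from your original script, one option is to filter the aggregated lines before the final sort, for example (a sketch; it assumes a non-empty .blacklist with one word per line, as in your script):

while read file; do cat stat.$file; done < file-list \
| sort -k2 \
| awk '{if ($2!=prev) {print s" "prev; s=0;}s+=$1;prev=$2;}END{print s" "prev;}' \
| awk 'NR==FNR { skip[$1]; next } length($2) > 2 && !($2 in skip)' .blacklist - \
| sort -rn > distribution.txt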