Bash - Looping through Array in Nested [FOR, WHILE, IF] statements



I am trying to process a large file set, appending specific lines to the "test_result.txt" file. I achieved it (not very elegantly) with the following code.



for i in *merged; do
    while read -r lo; do
        if [[ $lo == *"ID"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Instance"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"NOT"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"AI"* ]]; then
            echo $lo >> test_result.txt
        fi
        if [[ $lo == *"Sitting"* ]]; then
            echo $lo >> test_result.txt
        fi
    done < $i
done


However, I am trying to slim it down using an array, which has so far resulted in quite an unsuccessful attempt.



KEYWORDS=("ID" "Instance" "NOT" "AI" "Sitting")
KEY_COUNT=0

for i in *merged; do
    while read -r lo; do
        if [[$lo == $KEYWORDS[@] ]]; then
            echo $lo >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
        fi
    done < $i
done
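For reference, the five separate tests can also be collapsed into a single check by joining the array into one extended regular expression and matching it with bash's =~ operator. A minimal sketch along those lines, assuming the keywords contain no regex metacharacters:

KEYWORDS=("ID" "Instance" "NOT" "AI" "Sitting")
KEY_COUNT=0

# Join the array elements with "|" -> ID|Instance|NOT|AI|Sitting
# (IFS is only changed inside the command substitution's subshell)
pattern=$(IFS='|'; printf '%s' "${KEYWORDS[*]}")

for i in *merged; do
    while read -r lo; do
        if [[ $lo =~ $pattern ]]; then        # ERE match against any keyword
            echo "$lo" >> test_result.txt
            KEY_COUNT=$((KEY_COUNT + 1))
        fi
    done < "$i"
done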









bash

score 3 · asked 18 hours ago by AF.BJ (a new contributor) · edited 5 hours ago by Rui F Ribeiro
  • How large is the file set? This sounds like an XY problem that could be better accomplished by a straightforward grep command. – steeldriver, 17 hours ago

  • Small side note: Instead of KEY_COUNT="`expr $KEY_COUNT + 1`" you could also write ((KEY_COUNT++)) (see the sketch just after this list). – Freddy, 17 hours ago
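To illustrate Freddy's side note, bash's built-in arithmetic can do the increment without spawning an external expr process. A minimal sketch:

KEY_COUNT=0

((KEY_COUNT++))               # post-increment; the command's exit status is non-zero
                              # here because the expression evaluates to 0
KEY_COUNT=$((KEY_COUNT + 1))  # equivalent assignment form, always exits with status 0

echo "$KEY_COUNT"             # prints 2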

















2 Answers

It looks like you want to get all the lines that contain at least one of a set of words, from a set of files.



Assuming that you don't have many thousands of files, you could do that with a single grep command:



grep -wE '(ID|Instance|NOT|AI|Sitting)' ./*merged >outputfile


This would extract the lines matching any of the words listed in the pattern from the files whose names match *merged.



The -w option to grep ensures that the given strings are not matched as substrings (e.g. NOT will not be matched in NOTICE). The -E option enables the alternation with | in the pattern.
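A quick illustration of the difference -w makes, using two made-up sample lines:

printf '%s\n' 'NOTICE: all good' 'This is NOT fine' | grep -w 'NOT'
# prints only:  This is NOT fine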



Add the -h option to the command if you don't want the names of the files containing matching lines in the output.



If you do have many thousands of files, the above command may fail because the expanded command line becomes too long. In that case, you may want to do something like



for file in ./*merged; do
    grep -wE '(ID|Instance|NOT|AI|Sitting)' "$file"
done >outputfile


which would run the grep command once on each file, or,



find . -maxdepth 1 -type f -name '*merged' \
    -exec grep -wE '(ID|Instance|NOT|AI|Sitting)' {} + >outputfile


which would do as few invocations of grep as possible with as many files as possible at once.
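If you also want the running count that the original loop kept in KEY_COUNT, you can count matching lines instead of tracking them by hand; a small sketch building on the commands above:

grep -hwE '(ID|Instance|NOT|AI|Sitting)' ./*merged > outputfile
KEY_COUNT=$(wc -l < outputfile)                       # total matching lines
grep -cwE '(ID|Instance|NOT|AI|Sitting)' ./*merged    # per-file match counts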



Related:



  • Why is using a shell loop to process text considered bad practice?





score 5 · answered 15 hours ago by Kusalananda · edited 13 hours ago
  • It is indeed a file set of a few thousand. Originally, I built other processes into the loop, but running grep separately - before the extra tweakings - is a cleaner solution. Just needed to add the "-h" option to suppress the default prefixes - thanks. – AF.BJ, 11 hours ago

  • @AF.BJ since this answer solved your problem, consider accepting it: What should I do when someone answers my question? – muru, 2 hours ago


















Adding an array doesn't particularly help: you would still need to loop over the elements of the array (see How do I test if an item is in a bash array?):



while read -r lo; do
    for keyword in "${KEYWORDS[@]}"; do
        if [[ $lo == *"$keyword"* ]]; then
            echo $lo >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
        fi
    done
done < "$i"


It might be better to use a case statement:



while read -r lo; do
    case $lo in
        *ID*|*Instance*|*NOT*|*AI*|*Sitting*)
            echo "$lo" >> ~/Desktop/test_result.txt && KEY_COUNT="`expr $KEY_COUNT + 1`"
            ;;
    esac
done < "$i"


(I assume you do further processing of these lines within the loop. If not, grep or awk could do this more efficiently.)
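For completeness, a minimal awk sketch of that last point, using substring matches like the original [[ ... == *word* ]] tests and the same output file as above (the /dev/stderr count line works in GNU awk and most modern awks):

awk '/ID|Instance|NOT|AI|Sitting/ { n++; print }
     END { print n+0 " matching lines" > "/dev/stderr" }' ./*merged >> ~/Desktop/test_result.txt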






score 3 · answered 17 hours ago by muru