Is it correct to say that neural networks are an alternative way of performing Maximum Likelihood Estimation? If not, why?

We often say that minimizing the cross-entropy error (equivalently, the negative log-likelihood) is the same as maximizing the likelihood. So can we say that neural networks are just an alternative way of performing Maximum Likelihood Estimation? If not, why?
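For concreteness, the claimed equivalence is easy to check numerically. The sketch below (numpy only; the labels and predicted probabilities are made up for illustration) shows that for a binary classifier the cross-entropy is, up to a factor of 1/n, exactly the negative Bernoulli log-likelihood:

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0])            # observed binary labels
p = np.array([0.9, 0.2, 0.7, 0.6, 0.1])  # model's predicted P(y = 1)

# Bernoulli log-likelihood of the data under the model
log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Binary cross-entropy error, as usually averaged over the data
cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# cross_entropy == -log_lik / n, so minimizing one maximizes the other
assert np.isclose(cross_entropy, -log_lik / len(y))
```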










Tags: neural-networks, maximum-likelihood






asked 9 hours ago by aca06 (a new contributor)









Comment:

– Sycorax (+1, 7 hours ago): Possible duplicate of "Can we use MLE to estimate Neural Network weights?"












2 Answers
Answer 1 (score 3)













In abstract terms, neural networks are models, or, if you prefer, functions with unknown parameters, where we try to learn the parameters by minimizing a loss function (not just cross-entropy; there are many other possibilities). In general, minimizing a loss is in most cases equivalent to maximizing some likelihood function, but as discussed in this thread, it's not that simple.



You cannot say that they are equivalent, because minimizing a loss, or maximizing a likelihood, is a method of finding the parameters, while the neural network is the function defined in terms of those parameters.
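If it helps, here is a minimal sketch of that distinction (numpy only; the one-parameter "network" f(x; w) = w * x and the toy data are made up for illustration). The model is the function f; the likelihood being maximized is only the rule for choosing w, and swapping it changes the estimate but not the model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=0.5, size=100)  # toy regression data

def f(x, w):
    # The model: a (trivially small) "network", i.e. a function of x
    # parameterized by w. This is *what* we fit, not *how* we fit it.
    return w * x

def nll_gaussian(w):
    # Minimizing squared error = maximizing a Gaussian likelihood
    return np.sum((y - f(x, w)) ** 2)

def nll_laplace(w):
    # Minimizing absolute error = maximizing a Laplace likelihood
    return np.sum(np.abs(y - f(x, w)))

# Same model, two estimation criteria, (slightly) different estimates
ws = np.linspace(0.0, 4.0, 4001)
w_gauss = ws[np.argmin([nll_gaussian(w) for w in ws])]
w_laplace = ws[np.argmin([nll_laplace(w) for w in ws])]
print(w_gauss, w_laplace)
```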






answered 6 hours ago by Tim
Comments:

– Sycorax (+1, 6 hours ago): I'm trying to parse the distinction that you draw in the second paragraph. If I understand correctly, you would approve of a statement such as "My neural network model maximizes a certain log-likelihood" but not the statement "Neural networks and maximum likelihood estimators are the same concept." Is this a fair assessment?

– Tim (+1, 6 hours ago): @Sycorax Yes, that is correct. If it is unclear and you have an idea for a better phrasing, feel free to suggest an edit.

– aca06 (+1, 6 hours ago): What if instead we compare gradient descent and MLE? It seems to me that they are just two methods for finding the best parameters.

– Tim (+2, 6 hours ago): @aca06 Gradient descent is an optimization algorithm; MLE is a method of estimating parameters. You can use gradient descent to find the minimum of the negative log-likelihood function (or gradient ascent to maximize the likelihood).
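To make that last distinction concrete, here is a minimal sketch (numpy only; the data, step size, and iteration count are made up for illustration). The MLE defines which value we want, namely the maximizer of the likelihood; gradient descent is merely one way to find it, and for this model it lands on the same answer as the closed-form MLE, the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=500)

def nll_grad(mu):
    # Gradient of the negative log-likelihood of N(mu, 1)
    return np.sum(mu - data)

# Optimizer: gradient descent on the negative log-likelihood
mu = 0.0
for _ in range(200):
    mu -= 0.001 * nll_grad(mu)

# Estimator: the MLE, available in closed form for this model
mu_mle = data.mean()

assert np.isclose(mu, mu_mle, atol=1e-6)
print(mu, mu_mle)
```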


















Answer 2 (score 0)













These are fairly orthogonal topics.



Neural networks are a type of model that has a very large number of parameters. Maximum Likelihood Estimation is a very common method for estimating the parameters of a given model from data. Typically, a model allows you to compute a likelihood function from the model, the data, and the parameter values. Since we don't know what the actual parameter values are, one way of estimating them is to use the values that maximize the likelihood. Neural networks are our model; maximum likelihood estimation is one method for estimating the parameters of our model.



One slightly technical note is that Maximum Likelihood Estimation is often not exactly what is used in neural networks. That is, many of the regularization methods in use imply that we're not actually maximizing a likelihood function. These include:



(1) Penalized maximum likelihood. This one is a bit of a cop-out: it doesn't take too much effort to think of a penalized likelihood as just a different likelihood (i.e., one with priors) that one is maximizing (see the sketch after this list).



(2) Random dropout. Especially in many of the newer architectures, parameter values are randomly set to 0 during training. This procedure is more clearly outside the realm of maximum likelihood estimation.



(3) Early stopping. It's far from the most popular method, but one way to prevent overfitting is simply to stop the optimization algorithm before it converges. Again, this is technically not maximum likelihood estimation; it's really just an ad hoc solution to overfitting.



(4) Bayesian methods, probably the most common alternative to Maximum Likelihood Estimation in the statistics world, are also used for estimating the parameter values of a neural network. However, this is often too computationally intensive for large networks.
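Regarding point (1), here is a minimal sketch of reading a penalized likelihood as a different likelihood (numpy only; the toy data, penalty strength, and grid are made up for illustration). An L2 penalty adds exactly the negative log of a zero-mean Gaussian prior to the objective, so the penalized-ML estimate coincides with the MAP estimate under that prior:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=1.5, scale=1.0, size=20)
lam = 0.5  # L2 penalty strength; corresponds to a N(0, 1/(2*lam)) prior

def nll(mu):
    # Negative log-likelihood of N(mu, 1), up to an additive constant
    return 0.5 * np.sum((data - mu) ** 2)

grid = np.linspace(-5.0, 5.0, 20001)
mu_mle = grid[np.argmin([nll(mu) for mu in grid])]
mu_pen = grid[np.argmin([nll(mu) + lam * mu**2 for mu in grid])]

# The penalized estimate equals the closed-form posterior mode (MAP),
# which shrinks the sample mean toward the prior mean 0
n = len(data)
assert np.isclose(mu_pen, data.sum() / (n + 2 * lam), atol=1e-3)
print(mu_mle, mu_pen)  # mu_pen is pulled toward 0
```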






answered 4 hours ago by Cliff AB (edited 3 hours ago)