Is it correct to say that neural networks are an alternative way of performing Maximum Likelihood Estimation? If not, why?



We often say that minimizing the (negative) cross-entropy error is the same as maximizing the likelihood. So can we say that neural networks are just an alternative way of performing Maximum Likelihood Estimation? If not, why?
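As a minimal illustration of that premise (toy numbers; binary labels and predicted probabilities are assumed), the average binary cross-entropy is just the negative Bernoulli log-likelihood divided by the sample size, so minimizing one is the same as maximizing the other:

    import numpy as np

    # Toy labels and predicted probabilities (for illustration only).
    y = np.array([1, 0, 1, 1, 0])
    p = np.array([0.9, 0.2, 0.7, 0.6, 0.1])

    # Average binary cross-entropy loss.
    cross_entropy = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Bernoulli log-likelihood of the same data under the same probabilities.
    log_likelihood = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # The loss is the negative log-likelihood scaled by 1/n, so the two
    # objectives are optimized by the same parameters.
    assert np.isclose(cross_entropy, -log_likelihood / len(y))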










Tags: neural-networks, maximum-likelihood






asked 9 hours ago by aca06








Possible duplicate of Can we use MLE to estimate Neural Network weights?
– Sycorax, 7 hours ago










2 Answers

In abstract terms, neural networks are models, or, if you prefer, functions with unknown parameters, where we try to learn the parameters by minimizing a loss function (not just cross-entropy; there are many other possibilities). In most cases minimizing the loss is equivalent to maximizing some likelihood function, but as discussed in this thread, it is not that simple.



You cannot say that they are equivalent, because minimizing a loss, or maximizing a likelihood, is a method of finding the parameters, while a neural network is the function defined in terms of those parameters.
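A rough sketch of this distinction (a toy one-layer sigmoid network with made-up data; the function names are illustrative): the network is just a parameterized function, and the likelihood-based objective is only one of several criteria we could hand to an optimizer to choose its parameters.

    import numpy as np

    def network(x, w, b):
        # The "model": a one-layer network, i.e. a function of data and parameters.
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    def neg_log_likelihood(params, x, y):
        # One estimation criterion: maximum likelihood (Bernoulli), i.e. cross-entropy.
        w, b = params[:-1], params[-1]
        p = network(x, w, b)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    def squared_error(params, x, y):
        # A different criterion for the *same* model, not a likelihood in general.
        w, b = params[:-1], params[-1]
        return np.sum((network(x, w, b) - y) ** 2)

    # Toy data; either objective can be handed to any optimizer to pick the parameters.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(20, 2))
    y = (x[:, 0] + x[:, 1] > 0).astype(float)
    params = np.zeros(3)
    print(neg_log_likelihood(params, x, y), squared_error(params, x, y))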






answered 6 hours ago by Tim








I'm trying to parse the distinction that you draw in the second paragraph. If I understand correctly, you would approve of a statement such as "My neural network model maximizes a certain log-likelihood" but not the statement "Neural networks and maximum likelihood estimators are the same concept." Is this a fair assessment?
– Sycorax, 6 hours ago







@Sycorax Yes, that is correct. If it is unclear and you have an idea for a better phrasing, feel free to suggest an edit.
– Tim, 6 hours ago






What if, instead, we compare gradient descent and MLE? It seems to me that they are just two methods for finding the best parameters.
– aca06, 6 hours ago






@aca06 Gradient descent is an optimization algorithm; MLE is a method of estimating parameters. You can use gradient descent to find the minimum of the negative likelihood function (or gradient ascent to maximize the likelihood).
– Tim, 6 hours ago
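A minimal sketch of that separation (toy data and a Normal(mu, 1) model are assumed): gradient descent is only the numerical routine; it yields a maximum likelihood estimate here because the function it is asked to minimize is a negative log-likelihood.

    import numpy as np

    # Made-up observations assumed to come from a Normal(mu, 1) distribution.
    data = np.array([2.1, 1.7, 2.5, 1.9, 2.3])

    def neg_log_likelihood(mu):
        # Negative log-likelihood of Normal(mu, 1), dropping additive constants.
        return 0.5 * np.sum((data - mu) ** 2)

    def gradient(mu):
        return -np.sum(data - mu)

    # Gradient descent: the optimizer. Maximum likelihood: the criterion it is applied to.
    mu = 0.0
    for _ in range(200):
        mu -= 0.05 * gradient(mu)

    # For this model the MLE has a closed form (the sample mean), so we can check.
    assert np.isclose(mu, data.mean())
    print(mu)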



















These are fairly orthogonal topics.



Neural networks are a type of model with a very large number of parameters. Maximum Likelihood Estimation is a very common method for estimating parameters from a given model and data. Typically, a model allows you to compute a likelihood function from the data and parameter values. Since we don't know what the actual parameter values are, one way of estimating them is to use the value that maximizes the given likelihood. Neural networks are our model; maximum likelihood estimation is one method for estimating the parameters of our model.
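A bare-bones numerical version of that definition (toy coin-flip data assumed): the model and data define a likelihood as a function of the parameter, and the MLE is simply the parameter value where that function is largest.

    import numpy as np

    # Toy data: 10 coin flips, modelled as i.i.d. Bernoulli(theta).
    flips = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])

    # The model lets us compute the likelihood of the data for any parameter value.
    def likelihood(theta):
        return np.prod(theta ** flips * (1 - theta) ** (1 - flips))

    # Evaluate the likelihood on a grid and take the maximizer: the MLE.
    grid = np.linspace(0.001, 0.999, 999)
    mle = grid[np.argmax([likelihood(t) for t in grid])]

    print(mle)           # close to 0.7 ...
    print(flips.mean())  # ... which is the known closed-form MLE for this model.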



One slightly technical note is that, often, Maximum Likelihood Estimation is not exactly what is used in neural networks. That is, many regularization methods are used that imply we're not actually maximizing a likelihood function. These include:



(1) Penalized maximum likelihood. This one is a bit of a cop-out, as it doesn't take much effort to view a penalized likelihood as just a different likelihood (i.e., one with a prior) that one is maximizing; see the short derivation after this list.



(2) Random dropout. Especially in many of the newer architectures, parameter values (or, more commonly, unit activations) are randomly set to 0 during training. This procedure is more clearly outside the realm of maximum likelihood estimation.



(3) Early stopping. It's not the most popular method at all, but one way to prevent overfitting is simply to stop the optimization algorithm before it converges. Again, this is technically not maximum likelihood estimation; it's really just an ad hoc solution to overfitting.



(4) Bayesian methods, probably the most common alternative to Maximum Likelihood Estimation in the statistics world, are also used for estimating the parameter values of a neural network. However, this is often too computationally intensive for large networks.
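To spell out point (1) with the standard identity (generic notation, with $\theta$ for the weights and $X$ for the data): placing a Gaussian prior $\theta \sim \mathcal{N}(0, \sigma^2 I)$ on the weights gives

$$-\log p(\theta \mid X) = -\log p(X \mid \theta) + \frac{1}{2\sigma^2}\lVert \theta \rVert_2^2 + \text{const},$$

so minimizing an $L_2$-penalized negative log-likelihood with penalty weight $\lambda$ is exactly MAP estimation under that prior with $\sigma^2 = 1/(2\lambda)$.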






answered 4 hours ago by Cliff AB (edited 3 hours ago)












