Author Information

Alexander Wheeler

What campus are you from?

Daytona Beach

Authors' Class Standing

Alexander Wheeler, Sophomore; Pietro Furlan, Sophomore; Nathaniel Skarupa, Sophomore; James Wilburn, Sophomore; Nicholas Alberts, Sophomore; Mason Cisco, Sophomore

Lead Presenter's Name

Alexander Wheeler

Faculty Mentor Name

Dr. Hemanta Kunwar

Abstract

AI learning models use mathematical techniques to minimize error and improve prediction accuracy. Central to these techniques is gradient descent, which determines how to adjust a model's parameters to most effectively reduce error. Gradient descent is applied to the output of the model's loss function, a multi-variable function that mathematically represents the magnitude of error. This paper focuses on the implementation of gradient descent and its role in updating a model's parameters. Multiple types of loss functions are examined, such as squared error and cross-entropy, along with their influence on gradient descent. By connecting the mathematical concept of gradient descent to its real-world meaning, this paper aims to enhance understanding of the crucial process that drives model learning in modern artificial intelligence programs.
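The update loop the abstract describes can be illustrated with a minimal sketch: gradient descent on a squared-error loss for a one-parameter linear model. The data, learning rate, and step count below are illustrative assumptions, not taken from the paper.

```python
# Minimal gradient descent sketch for a model y = w * x with a
# mean squared error loss. All values here are illustrative.

def squared_error(w, data):
    """Mean squared error of predictions w*x against targets y."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w, data):
    """Derivative of the mean squared error with respect to w."""
    return sum(2 * x * (w * x - y) for x, y in data) / len(data)

def gradient_descent(data, w=0.0, lr=0.1, steps=100):
    """Repeatedly step w against the gradient to reduce the loss."""
    for _ in range(steps):
        w -= lr * gradient(w, data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x
w = gradient_descent(data)
print(round(w, 3))  # converges toward 2.0
```

Because the loss decreases along the negative gradient, each step moves the parameter toward the value that fits the data; with a cross-entropy loss only `squared_error` and `gradient` would change, not the update loop itself.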

Did this research project receive funding support from the Office of Undergraduate Research?

No


The Use of Gradient Descent in AI Learning Models
