IEEE Xplore Search Results

Showing 1-25 of 18,659 results

Results

In view of the possibility that the stochastic gradient descent method for Recurrent Neural Networks (RNNs) may converge to a local optimum, two fractional stochastic gradient descent methods are proposed in this paper. The methods respectively use a fractional-order substitute for the derivative part, defined by Caputo, and a fractional-order substitute for the difference form, defined by Riemann-Liouvill...
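The snippet does not give the exact update rule, so the following is only a rough sketch of the general idea: a commonly used first-order Caputo-style fractional gradient step, with all hyperparameters (alpha, lr, eps) assumed rather than taken from the paper.

```python
import numpy as np
from math import gamma

def fractional_sgd(grad_fn, theta0, alpha=0.9, lr=0.01, n_iter=1000, eps=1e-8):
    """Toy Caputo-style fractional gradient descent (illustrative only).

    Replaces the integer-order step with
        theta <- theta - lr * grad * |theta - theta_prev|**(1-alpha) / Gamma(2-alpha),
    a first-order approximation used in several fractional-GD papers.
    """
    theta = np.asarray(theta0, dtype=float)
    theta_prev = theta.copy()
    c = 1.0 / gamma(2.0 - alpha)  # Caputo normalization constant
    for _ in range(n_iter):
        g = grad_fn(theta)
        step = lr * c * g * (np.abs(theta - theta_prev) + eps) ** (1.0 - alpha)
        theta_prev, theta = theta, theta - step
    return theta

# Sanity check on a 1-D quadratic with minimum at 3
print(fractional_sgd(lambda t: 2.0 * (t - 3.0), np.array([0.0])))
```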
Fast automatic image registration is an important prerequisite for image-guided clinical procedures. However, due to the large number of voxels in an image and the complexity of registration algorithms, this process is often very slow. Stochastic gradient descent is a powerful method to iteratively solve the registration problem, but relies for convergence on a proper selection of the optimization...
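The step-size selection this snippet alludes to is often handled with a decaying gain sequence. Below is a minimal sketch of the standard Robbins-Monro form; the constants a, A, and alpha are purely illustrative, not the values any of these papers derive.

```python
def decaying_gain(k, a=1.0, A=20.0, alpha=0.602):
    """Robbins-Monro style gain sequence gamma_k = a / (k + A)**alpha.

    a, A, and alpha are problem-dependent tuning constants; alpha in
    (0.5, 1] gives the classical convergence guarantees for SGD.
    """
    return a / (k + A) ** alpha

# First few step sizes of an SGD run
for k in range(3):
    print(k, decaying_gain(k))
```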
Backpropagation neural networks are commonly utilized to solve complicated problems in various disciplines. However, optimizing their settings remains a significant challenge. Traditional gradient-based optimization methods, such as stochastic gradient descent (SGD), often exhibit slow convergence and hyperparameter sensitivity. An adaptive stochastic conjugate gradient (ASCG) optimization strategy for b...
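The ASCG rule itself is not shown in the truncated snippet; as a generic illustration of the conjugate-gradient idea applied to stochastic gradients, here is the classic Fletcher-Reeves direction update (a sketch, not the paper's method).

```python
import numpy as np

def fletcher_reeves_direction(g, g_prev, d_prev):
    """Conjugate search direction d_k = -g_k + beta_k * d_{k-1}, with the
    Fletcher-Reeves coefficient beta_k = ||g_k||^2 / ||g_{k-1}||^2,
    applied here to stochastic gradients.
    """
    beta = float(g @ g) / max(float(g_prev @ g_prev), 1e-12)
    return -g + beta * d_prev

# Inside a training loop (sketch): d starts as -g on the first iteration,
# then theta is updated with theta += lr * fletcher_reeves_direction(g, g_prev, d)
```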
In deterministic wireless channel models, the setting of environmental electrical parameters is an important part. Traditional calculation methods such as the genetic algorithm and the ant colony algorithm are inefficient. This paper studies the stochastic gradient descent (SGD) algorithm to solve for the electrical parameters. The simulation results show that the method based on the ray tracing algorithm ...
In this research we examine different optimization methods for Artificial Neural Networks (ANNs) with Deep Learning (DL). We examine the generalization performance of several existing optimizers from the Keras library, namely Stochastic Gradient Descent (SGD) with and without Nesterov momentum, Adam, and AdaMax, in solving regression tasks. We use the following activation functions: Rectified Li...
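The optimizers named here are all available in Keras, so a comparison of this kind can be set up directly. A minimal sketch follows; the architecture, learning rates, and synthetic data are assumptions, since the snippet gives none of them.

```python
import numpy as np
import tensorflow as tf

def make_model():
    # Small ReLU regression network; the paper's architecture is not given.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

optimizers = {
    "sgd": tf.keras.optimizers.SGD(learning_rate=1e-2),
    "sgd_nesterov": tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9, nesterov=True),
    "adam": tf.keras.optimizers.Adam(learning_rate=1e-3),
    "adamax": tf.keras.optimizers.Adamax(learning_rate=1e-3),
}

X = np.random.randn(1000, 10).astype("float32")  # synthetic regression data
y = X.sum(axis=1, keepdims=True)

for name, opt in optimizers.items():
    model = make_model()
    model.compile(optimizer=opt, loss="mse")
    hist = model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(name, hist.history["loss"][-1])
```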
Cost is the primary justification for outsourcing an organization’s IT hardware acquisition. By avoiding the need to hire more staff members and reducing the number of hours your present workers are required to work, outsourcing the purchase of hardware may help the business save money. Additionally, you won’t need to continually purchase brand-new IT hardware or software. Additionally, outsourcin...
More than 64 million people in the world suffer from heart failure. Automated diagnostic methods using machine learning are an effective tool for detecting the disease at an early stage. The study aims to build a prediction model for patients with suspected heart failure based on a linear regression model optimized with the stochastic gradient method. An open dataset of patients with suspected heart fai...
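A linear regression fitted by SGD maps naturally onto scikit-learn's SGDRegressor. A minimal sketch with synthetic stand-in features follows; the open heart-failure dataset the paper uses is not reproduced here, and all feature names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in features (think: age, ejection fraction, serum creatinine, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([0.5, -1.2, 0.8, 0.0, 0.3]) + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      SGDRegressor(loss="squared_error", max_iter=1000, tol=1e-3))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```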
We theoretically analyze the convergence, stability, and average convergence rate of the stochastic parallel gradient descent algorithm when it is used as the control algorithm for an adaptive optics system. Analysis results show the adaptive optics system can achieve convergence and stability, with a speedup by a factor of √n (where n is the number of control parameters) compared with sequential gradient ...
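For reference, the two-sided SPGD iteration underlying this analysis can be sketched in a few lines: all n control parameters are perturbed simultaneously, the performance metric is measured twice, and the controls are updated in proportion to the metric difference. Gain, perturbation size, and the toy metric below are illustrative assumptions.

```python
import numpy as np

def spgd(metric, u, gain=25.0, delta=0.02, n_iter=1000, rng=None):
    """Two-sided stochastic parallel gradient descent (ascent on a metric J).

    Update: u <- u + gain * (J(u + du) - J(u - du)) * du, where du is a
    random +/-delta Bernoulli perturbation of all parameters at once.
    """
    rng = rng or np.random.default_rng(0)
    u = np.asarray(u, dtype=float)
    for _ in range(n_iter):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)
        u = u + gain * dJ * du
    return u

# Toy metric with maximum at u = 1, standing in for e.g. focal-spot intensity
print(spgd(lambda u: -np.sum((u - 1.0) ** 2), np.zeros(8)))
```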
Plug-and-play (PnP) methods have recently emerged as a powerful framework for image reconstruction that can flexibly combine different physics-based observation models with data-driven image priors in the form of denoisers, and achieve state-of-the-art image reconstruction quality in many applications. In this paper, we aim to further improve the computational efficacy of PnP methods by designing ...
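The basic PnP template the snippet describes replaces the proximal operator of a regularizer with an off-the-shelf denoiser. A minimal proximal-gradient sketch follows; the moving-average "denoiser" and toy forward model are stand-ins (the paper designs a faster scheme not shown here).

```python
import numpy as np

def pnp_pgm(A, y, denoiser, x0, step, n_iter=200):
    """Plug-and-play proximal gradient sketch:
        x <- D(x - step * A.T @ (A @ x - y)),
    where the denoiser D acts as an implicit image prior.
    """
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # denoiser replaces the prox step
    return x

# Toy 1-D example; a moving-average filter stands in for a learned denoiser.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 60)) / np.sqrt(40)          # underdetermined forward model
x_true = np.zeros(60); x_true[10:20] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=40)
smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
x_hat = pnp_pgm(A, y, smooth, np.zeros(60), step=0.2)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```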
Stochastic compositional optimization generalizes classic (non-compositional) stochastic optimization to the minimization of compositions of functions. Each composition may introduce an additional expectation. The series of expectations may be nested. Stochastic compositional optimization is gaining popularity in applications such as meta learning. This paper presents a new Stochastically Correct...
In this paper, we propose an iterative method for smartphone localization in a 5G network. The location estimation accuracy of a smartphone's inbuilt GPS degrades in dense environments and indoor scenarios. We propose a combined model that uses the Received Signal Strength (RSS) from macrocells and femtocells, thereby increasing the localization accuracy. The location is estimated by optimizing the...
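Estimating a position from RSS typically means minimizing the squared mismatch between measured RSS and a path-loss model over all base stations. A gradient-descent sketch with an assumed log-distance model follows; p0, the path-loss exponent, and the base-station layout are all illustrative, not the paper's.

```python
import numpy as np

def locate(bs_xy, rss, p0=-30.0, n_exp=3.0, lr=1.0, n_iter=3000):
    """2-D position by gradient descent on the RSS least-squares cost with a
    log-distance path-loss model: rss_i ~ p0 - 10 * n_exp * log10(d_i).
    """
    x = np.mean(bs_xy, axis=0)                   # start at base-station centroid
    for _ in range(n_iter):
        diff = x - bs_xy                         # (num_bs, 2)
        d2 = np.sum(diff ** 2, axis=1) + 1e-9
        pred = p0 - 5.0 * n_exp * np.log10(d2)   # 10*n*log10(d) == 5*n*log10(d^2)
        resid = pred - rss
        # d(pred_i)/dx = -10*n_exp/ln(10) * diff_i / d2_i
        grad = np.sum((resid * (-10.0 * n_exp / np.log(10) / d2))[:, None] * diff,
                      axis=0)
        x -= lr * grad
    return x

bs = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true = np.array([30.0, 60.0])
rss = -30.0 - 30.0 * np.log10(np.linalg.norm(bs - true, axis=1))  # noiseless
print(locate(bs, rss))
```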
Interval type-2 fuzzy neural networks (IT2FNNs) have been widely used for modeling industrial processes, and efficient parameter learning methods are crucial for obtaining accurate models. However, the problems of low computational efficiency and poor convergence performance still exist in current learning methods. In this study, a novel hierarchical learning algorithm is proposed for cons...
To study optimization algorithms for convolutional neural networks, this paper combines the traditional stochastic gradient descent method with momentum and fractional-order optimization, and derives the momentum-based fractional-order stochastic gradient descent (MFSGD) algorithm for the fully connected layer and the convolution layer of convolutional neural networks, respectively. The...
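The layer-specific MFSGD updates are not reproduced in the snippet; purely as an illustration of the combination, here is a heavy-ball momentum step with the same Caputo-style fractional factor sketched earlier in this list (all constants assumed).

```python
import numpy as np
from math import gamma

def mfsgd_step(theta, theta_prev, velocity, grad, lr=0.01, momentum=0.9,
               alpha=0.9, eps=1e-8):
    """One heavy-ball step scaled by a fractional-order gradient factor
    |theta - theta_prev|**(1-alpha) / Gamma(2-alpha).  Illustrative only.
    """
    frac = (np.abs(theta - theta_prev) + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)
    velocity = momentum * velocity - lr * grad * frac
    return theta + velocity, velocity

# Minimal usage on a quadratic with minimum at 3
theta = np.array([0.0]); prev = theta.copy(); v = np.zeros_like(theta)
for _ in range(500):
    new, v = mfsgd_step(theta, prev, v, 2.0 * (theta - 3.0))
    prev, theta = theta, new
print(theta)
```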
A fractional-order gradient descent method applicable to spiking neural networks (SNNs) is proposed for the problem that SNNs are difficult to train with the stochastic gradient descent method. The method improves, respectively, the location of the gradient backpropagation calculation in SNN training and the output form of the last layer of the network structure, and its convergence is proved...
We propose mS2GD: a method incorporating a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent (S2GD). We consider the problem of minimizing a strongly convex function represented as the sum of an average of a large number of smooth convex functions, and a simple nonsmooth convex regularizer. Our method first performs a determ...
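The semi-stochastic structure this snippet describes alternates a full deterministic gradient with variance-reduced mini-batch steps (SVRG-style). A sketch of that inner/outer loop follows; the proximal step mS2GD uses for the nonsmooth regularizer is omitted, and all hyperparameters are assumed.

```python
import numpy as np

def ms2gd(grad_i, n, w0, lr=0.05, epochs=15, inner=200, batch=8, rng=None):
    """Mini-batch semi-stochastic gradient descent (SVRG-style sketch).

    Each outer epoch stores a snapshot y and its full gradient mu, then takes
    inner mini-batch steps along the variance-reduced direction
        v = mean_{i in batch}(grad_i(w) - grad_i(y)) + mu.
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(w0, dtype=float)
    for _ in range(epochs):
        y = w.copy()
        mu = np.mean([grad_i(i, y) for i in range(n)], axis=0)  # full gradient
        for _ in range(inner):
            idx = rng.integers(0, n, size=batch)
            v = np.mean([grad_i(i, w) - grad_i(i, y) for i in idx], axis=0) + mu
            w = w - lr * v
    return w

# Toy least squares: f_i(w) = 0.5 * (a_i @ w - b_i)^2
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5)); w_star = rng.normal(size=5); b = A @ w_star
g = lambda i, w: (A[i] @ w - b[i]) * A[i]
print(np.linalg.norm(ms2gd(g, 200, np.zeros(5)) - w_star))
```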
This brief proposes a new automatic model parameter selection approach for determining the optimal configuration of high-speed analog-to-digital converters (ADCs) using a combination of particle swarm optimization (PSO) and stochastic gradient descent (SGD) algorithms. The proposed hybrid method first initializes the PSO algorithm to search for optimal neural-network configuration via the particles...
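The hybrid pattern here (PSO for a coarse global search, then gradient descent for local refinement) can be sketched generically; the demo below uses a standard multimodal test function rather than anything ADC-specific, and all PSO coefficients are conventional assumed values.

```python
import numpy as np

def pso_then_gd(f, grad_f, bounds, n_particles=20, pso_iters=50,
                gd_iters=500, lr=0.002, rng=None):
    """Hybrid optimizer sketch: PSO finds a promising basin, then plain
    gradient descent polishes the best particle's position.
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(pso_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    z = gbest.copy()                      # local refinement from PSO's best
    for _ in range(gd_iters):
        z -= lr * grad_f(z)
    return z

# Demo on the 2-D Rastrigin function (global minimum at the origin)
f = lambda p: np.sum(p**2 - 10 * np.cos(2 * np.pi * p) + 10)
g = lambda p: 2 * p + 20 * np.pi * np.sin(2 * np.pi * p)
print(pso_then_gd(f, g, (np.array([-2.0, -2.0]), np.array([2.0, 2.0]))))
```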
Stochastic compositional optimization generalizes classic (non-compositional) stochastic optimization to the minimization of compositions of functions. Each composition may introduce an additional expectation. The series of expectations may be nested. Stochastic compositional optimization is gaining popularity in applications such as reinforcement learning and meta learning. This paper presents a ...
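For this snippet and its companion above, the baseline being improved upon is the classic two-timescale stochastic compositional gradient method: a running average tracks the inner expectation, and stochastic Jacobians/gradients are chained. A sketch of that baseline (not the corrected methods the papers propose) follows, with assumed step sizes.

```python
import numpy as np

def scgd(g_sample, Jg_sample, f_grad_sample, x0, y0, lr=0.01, beta=0.1,
         n_iter=5000, rng=None):
    """Basic stochastic compositional gradient descent for min_x f(g(x)),
    where both f and g are expectations.  y tracks the inner value g(x).
    """
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(n_iter):
        y = (1 - beta) * y + beta * g_sample(x, rng)          # track g(x)
        x = x - lr * Jg_sample(x, rng).T @ f_grad_sample(y, rng)
    return x

# Toy: g(x) = x + noise, f(y) = 0.5 * ||y - 1||^2  =>  minimizer x* = 1
g = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)
Jg = lambda x, rng: np.eye(x.size)
fg = lambda y, rng: y - 1.0
print(scgd(g, Jg, fg, np.zeros(3), np.zeros(3)))
```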
Large-scale classification is an important task in machine learning, especially in the smart city field, which is a big data environment. In recent years, single-threaded optimization algorithms can no longer meet the needs of big data applications. When training on a single machine, one faces the problems of insufficient memory and limited computing capacity. Therefore, distributed algorithms ha...
This paper studies the feasibility of solving matrix equations in the method of moments (MoM) based on the stochastic gradient descent (SGD) technique (SGD-MoM). We adopted optimization techniques from machine learning to solve the matrix equations in MoM. Numerical results demonstrate the feasibility of the proposed method, and its accuracy and efficiency, compared with conventional iterative met...
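Casting a dense system Z I = V as minimization of ||Z I - V||^2 makes it amenable to row-sampled SGD. A sketch follows; a random well-conditioned complex matrix stands in for the MoM impedance matrix, and the step size is an assumption.

```python
import numpy as np

def sgd_solve(Z, V, lr=0.1, n_epochs=200, rng=None):
    """Solve Z I = V by SGD on ||Z I - V||^2, sampling one matrix row per
    step (a stand-in for the SGD-MoM idea described in the snippet).
    """
    rng = rng or np.random.default_rng(0)
    n = Z.shape[0]
    I = np.zeros(Z.shape[1], dtype=Z.dtype)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            r = Z[i] @ I - V[i]             # scalar residual of row i
            I = I - lr * np.conj(Z[i]) * r  # Wirtinger gradient step
    return I

rng = np.random.default_rng(0)
n = 50
Z = np.eye(n) + 0.1 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
I_true = rng.normal(size=n) + 1j * rng.normal(size=n)
V = Z @ I_true
print(np.linalg.norm(sgd_solve(Z, V) - I_true) / np.linalg.norm(I_true))
```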
Convolutional neural networks (CNNs) are generally trained using stochastic gradient descent (SGD)-based optimization techniques. The existing SGD optimizers generally suffer from overshooting of the minimum and oscillation near the minimum. In this article, we propose a new approach, hereafter referred to as AdaInject, for gradient descent optimizers by injecting the second-order moment into...
We present a technique to improve the optimization of deep neural networks by introducing favorable random gradients during an additional optimization sub-step that positively affects the training process. This technique allows training deep neural networks faster, resulting in smaller training loss. Only the random gradients that do not degrade the network's result on a training mini-batch are sel...
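The acceptance test described here (keep a random perturbation only if it does not worsen the mini-batch loss) is simple to sketch; the perturbation scale and placement of the sub-step are assumptions, since the snippet does not specify them.

```python
import numpy as np

def random_substep(loss_fn, theta, scale=0.01, rng=None):
    """Try a random parameter perturbation and keep it only when it does not
    increase the mini-batch loss -- a sketch of a 'favorable random gradient'
    sub-step applied between ordinary SGD updates.
    """
    rng = rng or np.random.default_rng(0)
    candidate = theta + scale * rng.normal(size=theta.shape)
    return candidate if loss_fn(candidate) <= loss_fn(theta) else theta

# Usage after a normal SGD step, with loss evaluated on the current mini-batch:
# theta = random_substep(lambda t: minibatch_loss(t, X_batch, y_batch), theta)
```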
In this paper, a novel variable order fractional gradient descent optimization algorithm is proposed, which generalizes the classical gradient descent method by introducing a kind of variable order fractional derivative. The derivative order is adjusted with the number of iterations. The convergence of the algorithm is analyzed. And the proposed method is also applied to the training of the full c...Show More
With the rapid development of neural network models, gradient descent, as the most important optimization algorithm for neural network models, has received extensive attention from academia. At present, neural network technology is used in various fields, such as computer vision, natural language processing, and speech recognition. In recent years, many kinds of gradient descent algorithms have...
We consider the distributed stochastic optimization problem of minimizing a nonconvex function f in an adversarial setting. All the w worker nodes in the network are expected to send their stochastic gradient vectors to the fusion center (or server). However, some (at most α-fraction) of the nodes may be Byzantines, which may send arbitrary vectors instead. Vanilla implementation of distributed st...
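The standard defense in this setting is to replace plain averaging at the fusion center with a robust aggregation rule. The snippet does not specify the paper's rule; the coordinate-wise median shown below is one common choice, illustrated with hypothetical honest and Byzantine workers.

```python
import numpy as np

def robust_aggregate(worker_grads):
    """Coordinate-wise median of worker gradient vectors: a standard
    Byzantine-robust alternative to plain averaging at the fusion center.
    """
    return np.median(np.stack(worker_grads), axis=0)

# 7 honest workers plus 3 Byzantine workers sending arbitrary vectors
rng = np.random.default_rng(0)
honest = [np.array([1.0, -2.0]) + 0.1 * rng.normal(size=2) for _ in range(7)]
byzantine = [np.array([1e6, 1e6]) for _ in range(3)]
print(robust_aggregate(honest + byzantine))  # stays near [1, -2]
```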
Accurate classification of ocular conditions is crucial for the development of computer-aided ophthalmic diagnosis systems. Traditional methods relying on expert analysis can be subjective and time-consuming. This study presents a machine learning-based approach to classify eye images into six categories: normal, glaucoma, strabismus, bulging, uveitis, and cataract. We employed convolutional neural...