[87] Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample

Albert S. Berahas, Majid Jahani, Peter Richtárik, Martin Takáč
2021 Journal Paper Optimization Methods and Software (OMS)

Abstract

We present two sampled quasi-Newton methods for deep learning: sampled LBFGS (S-LBFGS) and sampled LSR1 (S-LSR1). Contrary to the classical variants of these methods that sequentially build Hessian or inverse Hessian approximations as the optimization progresses, our proposed methods sample points randomly around the current iterate at every iteration to produce these approximations. As a result, the approximations constructed make use of more reliable (recent and local) information, and do not depend on past iterate information that could be significantly stale. Our proposed algorithms are efficient in terms of accessed data points (epochs) and have enough concurrency to take advantage of parallel/distributed computing environments. We provide convergence guarantees for our proposed methods. Numerical tests on a toy classification problem as well as on popular benchmarking neural network training tasks reveal that the methods outperform their classical variants.
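The sampling idea can be pictured in a few lines: instead of storing curvature pairs (s, y) from past iterates, fresh pairs are generated from random displacements around the current point. The sketch below is an illustrative toy (function and parameter names are ours, not the paper's):

```python
import numpy as np

def sample_curvature_pairs(grad, w, m=5, radius=0.1, rng=None):
    """Sample m curvature pairs (s_i, y_i) around the current iterate w.

    Unlike classical L-BFGS/SR1, which reuse pairs from past iterates,
    every pair here is generated fresh at w, so the curvature
    information is local and never stale.
    """
    rng = np.random.default_rng(rng)
    g = grad(w)
    pairs = []
    for _ in range(m):
        s = radius * rng.standard_normal(w.shape)  # random displacement
        y = grad(w + s) - g                        # curvature along s
        pairs.append((s, y))
    return pairs

# On a quadratic f(w) = 0.5 w^T A w the pairs satisfy y = A s exactly.
A = np.diag([1.0, 10.0])
pairs = sample_curvature_pairs(lambda w: A @ w, np.ones(2), rng=0)
s, y = pairs[0]
assert np.allclose(y, A @ s)
```

These pairs would then feed a standard L-BFGS two-loop recursion or SR1 update in place of history-based pairs; since the m gradient evaluations are independent, they can be computed in parallel, which is the source of the concurrency mentioned above.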

[86] Improving Text-to-Image Synthesis Using Contrastive Learning

Hui Ye, Xiulong Yang, Martin Takáč, Rajshekhar Sunderraman, Shihao Ji
2021 Conference Paper The British Machine Vision Conference (BMVC 2021)

Abstract

The goal of text-to-image synthesis is to generate a visually realistic image that matches a given text description. In practice, the captions annotated by humans for the same image vary considerably in content and choice of words. The linguistic discrepancy between the captions of the same image leads to synthetic images deviating from the ground truth. To address this issue, we propose a contrastive learning approach to improve the quality and enhance the semantic consistency of synthetic images. In the pre-training stage, we use contrastive learning to learn consistent textual representations for the captions corresponding to the same image. In the subsequent GAN training stage, we again employ contrastive learning to enhance the consistency between images generated from captions describing the same image. We evaluate our approach on two popular text-to-image synthesis models, AttnGAN and DM-GAN, on the CUB and COCO datasets. Experimental results show that our approach effectively improves the quality of synthetic images in terms of three metrics: IS, FID, and R-precision. In particular, on the challenging COCO dataset, our approach boosts FID significantly, by 29.60% over AttnGAN and by 21.96% over DM-GAN.

[85] Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information

Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takáč
2021 arXiv Preprint

Abstract

We present a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with a low-cost estimate of local curvature and Lipschitz smoothness, our method dynamically adapts the search direction and step-size. The search direction contains gradient information preconditioned by a well-scaled diagonal preconditioning matrix that captures the local curvature information. Our methodology does not require the tedious task of learning rate tuning, as the learning rate is updated automatically without adding an extra hyperparameter. We provide convergence guarantees on a comprehensive collection of optimization problems, including convex, strongly convex, and nonconvex problems, in both deterministic and stochastic regimes. We also conduct an extensive empirical evaluation on standard machine learning problems, justifying our algorithm's versatility and demonstrating its strong performance compared to other state-of-the-art first-order and second-order methods.
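As a rough sketch of what a well-scaled diagonal preconditioner can do (illustrative only; the paper's actual update rule, safeguards, and step-size logic differ), per-coordinate curvature can be estimated secant-style from the most recent gradient difference:

```python
import numpy as np

def precond_gd_step(grad, w, w_prev, g_prev, eps=1e-8):
    """One gradient step preconditioned by a diagonal curvature estimate.

    The local curvature along each coordinate is estimated secant-style
    from the most recent gradient difference:
        d_i ~ |g_i - g_prev_i| / |w_i - w_prev_i|,
    and the gradient is rescaled coordinate-wise by 1/d_i.
    """
    g = grad(w)
    d = np.abs(g - g_prev) / (np.abs(w - w_prev) + eps)
    d = np.maximum(d, eps)          # keep the preconditioner invertible
    return w - g / d, g

# Ill-conditioned quadratic: curvature 1 along x1, 100 along x2.
A = np.diag([1.0, 100.0])
grad = lambda w: A @ w
w_prev = np.array([1.0, 1.0])
w = np.array([0.9, 0.5])
w_next, _ = precond_gd_step(grad, w, w_prev, grad(w_prev))
assert np.allclose(w_next, 0.0, atol=1e-6)   # one step to the minimizer
```

On this diagonal quadratic the secant estimate recovers the exact diagonal Hessian, so a single preconditioned step lands (numerically) on the minimizer; on general problems the estimate only approximates the local curvature.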

[84] Fast and Safe: Accelerated gradient methods with optimality certificates and underestimate sequences

Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden, Martin Takáč
2021 Journal Paper Computational Optimization and Applications

Abstract

In this work we introduce the concept of an Underestimate Sequence (UES), which is a natural extension of Nesterov's estimate sequence. Our definition of a UES utilizes three sequences, one of which is a lower bound (or under-estimator) of the objective function. The question of how to construct an appropriate sequence of lower bounds is also addressed, and we present lower bounds for strongly convex smooth functions and for strongly convex composite functions which adhere to the UES framework. Further, we propose several first-order methods for minimizing strongly convex functions in both the smooth and composite cases. The algorithms, based on efficiently updating lower bounds on the objective functions, have natural stopping conditions, which provide the user with a certificate of optimality. Convergence of all algorithms is guaranteed through the UES framework, and we show that all presented algorithms converge linearly, with the accelerated variants enjoying the optimal linear rate of convergence.
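For intuition, a standard lower bound of this kind, valid for any mu-strongly convex differentiable f (a generic textbook example, not the paper's exact construction):

```latex
f(y) \;\ge\; \phi_k(y) \;:=\; f(x_k) + \langle \nabla f(x_k),\, y - x_k \rangle
  + \tfrac{\mu}{2}\,\lVert y - x_k \rVert^2 \qquad \text{for all } y.
```

Minimizing the right-hand side gives min_y phi_k(y) = f(x_k) - (1/(2mu)) ||grad f(x_k)||^2 <= f*, so the computable gap f(x_k) - min_y phi_k(y) upper-bounds the true suboptimality f(x_k) - f* and can serve as a stopping certificate.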

[83] Classification-Aware Path Planning of Network of Robots

Guangyi Liu, Arash Amini, Martin Takáč, Nader Motee
2021 Conference Paper DARS 2021

[82] SONIA: A Symmetric Blockwise Truncated Optimization Algorithm

Majid Jahani, Mohammadreza Nazari, Rachael Tappenden, Albert S. Berahas, Martin Takáč
2021 Conference Paper AISTATS '21

Abstract

This work presents a new algorithm for empirical risk minimization. The algorithm bridges the gap between first- and second-order methods by computing a search direction that uses a second-order-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement. To this end, partial curvature information is incorporated to help with ill-conditioning, while simultaneously allowing the algorithm to scale to the large problem dimensions often encountered in machine learning applications. Theoretical results are presented to confirm that the algorithm converges to a stationary point in both the strongly convex and nonconvex cases. A stochastic variant of the algorithm is also presented, along with corresponding theoretical guarantees. Numerical results confirm the strengths of the new approach on standard machine learning problems.

[81] A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization

Afshin Oroojlooyjadid, MohammadReza Nazari, Lawrence V. Snyder, Martin Takáč
2021 Journal Paper Manufacturing and Service Operations Management (accepted)

Abstract

Problem definition: The beer game is widely used in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multiagent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network’s total cost, although each agent only observes local information. Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally. Methodology: We propose a deep reinforcement learning (RL) algorithm to play the beer game. Our algorithm makes no assumptions about costs or other settings. As with any deep RL algorithm, training is computationally intensive, but once trained, the algorithm executes in real time. We propose a transfer-learning approach so that training performed for one agent can be adapted quickly for other agents and settings. Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More important, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. We observe similar results using a real-world data set. Sensitivity analysis shows that a trained model is robust to changes in the cost coefficients. Finally, applying transfer learning reduces the training time by one order of magnitude. Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways. 
Our RL agent has been integrated into a new online beer game, which has been played more than 17,000 times by more than 4,000 people.
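For reference, the base-stock benchmark policy mentioned above is a one-liner (textbook formulation; the function name and signature are ours):

```python
def base_stock_order(S, inventory_position):
    """Base-stock replenishment: order up to level S.

    inventory_position = on-hand + on-order - backorders. This is the
    classical benchmark the RL agent is compared against, optimal only
    under certain conditions on costs and teammate behavior.
    """
    return max(0, S - inventory_position)

assert base_stock_order(10, 7) == 3    # below target: order the gap
assert base_stock_order(10, 12) == 0   # above target: order nothing
```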

[80] AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods

Zheng Shi, Nicolas Loizou, Peter Richtárik, Martin Takáč
2021 arXiv Preprint

Abstract

We present an adaptive stochastic variance-reduced method with an implicit approach to adaptivity. As a variant of SARAH, our method employs the stochastic recursive gradient yet adjusts its step-size based on local geometry. We provide convergence guarantees for finite-sum minimization problems and show that faster convergence than SARAH can be achieved when the local geometry permits. Furthermore, we propose a practical, fully adaptive variant, which requires neither knowledge of the local geometry nor any effort in tuning hyper-parameters. This algorithm implicitly computes the step-size and efficiently estimates the local Lipschitz smoothness of the stochastic functions. Numerical experiments demonstrate the algorithm's strong performance compared to its classical counterparts and other state-of-the-art first-order methods.
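The stochastic recursive gradient at the heart of SARAH-type methods can be sketched as follows (a minimal serial toy with a fixed step-size; the paper's contribution is precisely the adaptive step-size, which is omitted here):

```python
import numpy as np

def sarah_inner_loop(grads, w0, lr=0.1, rng=None):
    """One outer SARAH-style pass over n component gradients.

    The estimator is recursive,
        v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1},
    anchored by one full gradient at the start of the pass.
    """
    rng = np.random.default_rng(rng)
    n = len(grads)
    v = sum(g(w0) for g in grads) / n            # full gradient anchor
    w_prev, w = w0, w0 - lr * v
    for _ in range(n):
        i = rng.integers(n)
        v = grads[i](w) - grads[i](w_prev) + v   # recursive update
        w_prev, w = w, w - lr * v
    return w

# Mean of 0.5*(w-1)^2 and 0.5*(w-3)^2 is minimized at w = 2.
grads = [lambda w: w - 1.0, lambda w: w - 3.0]
w = sarah_inner_loop(grads, np.array([0.0]), rng=0)
assert abs(w[0] - 2.0) < 1.5    # moved toward the optimum from w0 = 0
```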

[79] Active Metric Learning for Supervised Classification

Krishnan Kumaran, Dimitri Papageorgiou, Martin Takáč, Laurens Lueg, Nicolas Sahinidis
2021 Journal Paper Computers and Chemical Engineering

Abstract

Clustering and classification critically rely on distance metrics that provide meaningful comparisons between data points. To this end, learning optimal distance functions from data, known as metric learning, aims to facilitate supervised classification, particularly in high-dimensional spaces where visualization is challenging or infeasible. In particular, the Mahalanobis metric is the default choice due to simplicity and interpretability as a transformation of the simple Euclidean metric using a combination of rotation and scaling. In this work, we present several novel contributions to metric learning, both by way of formulation as well as solution methods. Our approach is motivated by agglomerative clustering with certain novel modifications that enable natural interpretation of the user-defined classes as clusters with the optimal metric. Our approach generalizes and improves upon leading methods by removing reliance on pre-designated “target neighbors,” “triplets,” and “similarity pairs.” Starting with the definition of a generalized metric that has the Mahalanobis metric as the second order term, we propose an objective function for metric selection that does not aim to isolate classes from each other like most previous work, but tries to distort the space minimally by aggregating co-class members into local clusters. Further, we formulate the problem as a mixed-integer optimization that can be solved efficiently for small/medium datasets and approximated for larger datasets. Another salient feature of our method is that it facilitates active learning by recommending precise regions to sample using the optimal metric to improve classification performance. These regions are indicated by boundary and outlier points of the dataset as defined by the metric. 
This targeted acquisition can significantly reduce computation and data acquisition by ensuring training data completeness, representativeness, and economy, which could also provide advantages in training data selection for other established methods like Deep Learning and Random Forests. We demonstrate classification and computational performance of our approach through several simple and intuitive examples, followed by results on real image and benchmark datasets.

[78] Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1

Majid Jahani, MohammadReza Nazari, Sergey Rusakov, Albert S. Berahas, Martin Takáč
2020 Conference Paper 6th Annual Conference on Machine Learning, Optimization and Data Science (LOD), 2020

Abstract

In this paper, we present a scalable distributed implementation of the sampled LSR1 (S-LSR1) algorithm. First, we show that a naive distributed implementation of S-LSR1 requires multiple rounds of expensive communications at every iteration and thus is inefficient. We then propose DS-LSR1, a communication-efficient variant of the S-LSR1 method, that drastically reduces the amount of data communicated at every iteration, that has favorable work-load balancing across nodes and that is matrix-free and inverse-free. The proposed method scales well in terms of both the dimension of the problem and the number of data points. Finally, we illustrate the performance of DS-LSR1 on standard neural network training tasks.

[77] Reinforcement Learning based Multi-Robot Classification via Scalable Communication Structure

Guangyi Liu, Arash Amini, Martin Takáč, Héctor Muñoz-Avila, Nader Motee
2020 arXiv Preprint

Abstract

In the multi-robot collaboration domain, training with Reinforcement Learning (RL) can become intractable, and performance deteriorates drastically as the number of robots increases. In this work, we propose a distributed multi-robot learning architecture with a scalable communication structure capable of learning a robust communication policy for a time-varying communication topology. We construct the communication structure with Long Short-Term Memory (LSTM) cells and star graphs, so that the computational complexity of the proposed learning algorithm scales linearly with the number of robots, making it suitable for applications with large robot teams. The proposed methodology is validated on a map classification problem in a simulated environment. We show that the proposed architecture achieves classification accuracy comparable to centralized methods, maintains high performance across varying numbers of robots without additional training cost, and is robust to hacking and to the loss of robots in the network.

[76] DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction

2020 Journal Paper Engineering Structures

Abstract

Data-driven models for predicting dynamic responses of linear and nonlinear systems are of great importance due to their wide range of applications, from probabilistic analysis to inverse problems such as system identification and damage diagnosis. In this study, a physics-based recurrent neural network model is designed that is able to learn the dynamics of linear and nonlinear multiple-degrees-of-freedom systems given a ground motion. The model can estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces. Compared to the most advanced counterparts, this model requires a smaller number of trainable variables while achieving higher prediction accuracy on long trajectories. In addition, the architecture of the recurrent block is inspired by differential equation solver algorithms, which is expected to yield more generalizable solutions. In the training phase, we propose multiple novel techniques to dramatically accelerate the learning process using smaller datasets, such as hard sampling, the use of a trajectory loss function, and a trust-region approach. Numerical case studies are conducted to examine the ability of the network to learn different nonlinear behaviors. It is shown that the network captures different nonlinear behaviors of dynamic systems with very high accuracy and with no need for prior information or very large datasets.

[75] Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes

Peter Richtárik, Majid Jahani, Selin Damla Ahipasaoglu, Martin Takáč
2020 Journal Paper Optimization and Engineering (OPTE)

Abstract

Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with the L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation, which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al., JMLR 11:517--553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU, and cluster) for each of the 8 problems. Parallelism in the methods is aimed at

[74] Inexact SARAH Algorithm for Stochastic Optimization

Lam Minh Nguyen, Katya Scheinberg, Martin Takáč
2020 Journal Paper Optimization Methods and Software (OMS)

Abstract

We develop and analyze a variant of the variance-reducing stochastic gradient algorithm SARAH which does not require computation of the exact gradient. Thus this new method can be applied to general expectation minimization problems rather than only finite-sum problems. While the original SARAH algorithm, as well as its predecessor SVRG, requires an exact gradient computation at each outer iteration, the inexact variant of SARAH (iSARAH) developed here requires only a stochastic gradient computed on a mini-batch of sufficient size. The proposed method combines variance reduction via sample size selection and iterative stochastic gradient updates. We analyze the convergence rate of the algorithm for the strongly convex, convex, and nonconvex cases, with an appropriate mini-batch size selected for each case. We show that, under an additional reasonable assumption, iSARAH achieves the best-known complexity among stochastic methods for general convex stochastic value functions.

[73] A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning

Aryan Mokhtari, Alec Koppel, Martin Takáč, Alejandro Ribeiro
2020 Journal Paper Journal of Machine Learning Research

Abstract

We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on randomly chosen subsets of blocks of the feature vector. RAPSA is doubly stochastic since each processor uses a random set of functions to compute the stochastic gradient associated with a randomly chosen set of variable coordinates. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors use the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is strongly convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates objective function curvature information by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results to asynchronous settings and show that if the processors perform their updates without any coordination, the algorithms still converge to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and on a binary image classification task using the MNIST handwritten digit dataset.
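The doubly stochastic update can be sketched serially (illustrative names, not the paper's code): a random mini-batch of component gradients is averaged, but only a random block of coordinates is updated.

```python
import numpy as np

def rapsa_step(w, grads, block_size, batch_size, lr, rng):
    """One RAPSA-style update: stochastic in both the samples
    (mini-batch of component gradients) and the coordinates
    (random block of the parameter vector)."""
    n, d = len(grads), w.size
    batch = rng.choice(n, size=batch_size, replace=False)
    block = rng.choice(d, size=block_size, replace=False)
    g = sum(grads[i](w) for i in batch) / batch_size
    w = w.copy()
    w[block] -= lr * g[block]          # touch only the chosen block
    return w

# Least squares: grad_i(w) = w - a_i, minimizer is the mean of the a_i.
a = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 3.0, 4.0, 5.0],
              [3.0, 4.0, 5.0, 6.0],
              [0.0, 1.0, 2.0, 1.0]])
grads = [lambda w, ai=ai: w - ai for ai in a]
rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(400):
    w = rapsa_step(w, grads, block_size=2, batch_size=2, lr=0.2, rng=rng)
assert np.linalg.norm(w - a.mean(axis=0)) < 2.0   # drifts to the minimizer
```

In the actual method each of the parallel processors runs such an update on its own block simultaneously; the serial loop above only illustrates the double sampling.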

[72] Uncertainty quantification in digital image correlation for experimental evaluation of deep learning based damage diagnostic

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2020 Journal Paper Structure and Infrastructure Engineering

[71] Constrained Combinatorial Optimization with Reinforcement Learning

Ruben Solozabal, Josu Ceberio, Martin Takáč
2020 arXiv Preprint

Abstract

This paper presents a framework to tackle constrained combinatorial optimization problems using deep Reinforcement Learning (RL). To this end, we extend Neural Combinatorial Optimization (NCO) theory to deal with constraints in its formulation. Notably, we propose defining constrained combinatorial problems as fully observable Constrained Markov Decision Processes (CMDPs). In that context, the solution is iteratively constructed based on interactions with the environment. The model, in addition to the reward signal, relies on penalty signals generated from constraint dissatisfaction to infer a policy that acts as a heuristic algorithm. Moreover, having access to the complete state representation during the optimization process allows us to rely on memory-less architectures, improving on the results obtained by previous sequence-to-sequence approaches. Experiments on the constrained Job Shop and Resource Allocation problems demonstrate the superiority of the proposed approach for computing rapid solutions when compared to classical heuristic, metaheuristic, and Constraint Programming (CP) solvers.
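The penalty mechanism described above can be sketched in one line (illustrative names, not the paper's API): the agent maximizes the task reward minus a weighted measure of constraint dissatisfaction.

```python
def shaped_reward(reward, violations, weights):
    """Penalty-shaped learning signal: task reward minus weighted
    constraint dissatisfaction."""
    return reward - sum(w * v for w, v in zip(weights, violations))

# A schedule that finishes faster but exceeds a resource cap by 3 units
# scores worse than a slower feasible one, steering the policy toward
# feasibility without hard-coding the constraint into the architecture.
fast_infeasible = shaped_reward(reward=-10.0, violations=[3.0], weights=[5.0])
slow_feasible = shaped_reward(reward=-14.0, violations=[0.0], weights=[5.0])
assert fast_infeasible == -25.0
assert slow_feasible > fast_infeasible
```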

[70] Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations

Zheng Shi, Nur Sila Gulgec, Albert S. Berahas, Shamim N. Pakzad, Martin Takáč
2020 Conference Paper 19th IEEE International Conference on Machine Learning and Applications

Abstract

Discovering the underlying behavior of complex systems is an important topic in many science and engineering disciplines. In this paper, we propose a novel neural network framework, finite difference neural networks (FDNet), to learn partial differential equations from data. Specifically, our proposed finite difference inspired network is designed to learn the underlying governing partial differential equations from trajectory data, and to iteratively estimate the future dynamical behavior using only a few trainable parameters. We illustrate the performance (predictive power) of our framework on the heat equation, with and without noise and/or forcing, and compare our results to the Forward Euler method. Moreover, we show the advantages of using a Hessian-Free Trust Region method to train the network.
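As a fixed-stencil reference point for the learnable filters in FDNet (our own toy, not the paper's architecture), here is one explicit forward-Euler step of the 1-D heat equation with the classical [1, -2, 1] finite-difference stencil:

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One forward-Euler step of the 1-D heat equation using the
    [1, -2, 1] stencil (grid spacing and time step absorbed into
    alpha; boundary values held fixed)."""
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]   # discrete Laplacian
    return u + alpha * lap

u0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # heat spike in the middle
u1 = heat_step(u0)
assert np.allclose(u1, [0.0, 0.1, 0.8, 0.1, 0.0])   # spike diffuses
assert np.isclose(u1.sum(), u0.sum())               # heat is conserved
```

FDNet replaces the fixed stencil with trainable filters, learning the governing equation from trajectory data instead of assuming it.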

[69] Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

Peter Richtárik, Martin Takáč
2020 Journal Paper SIAM Journal on Matrix Analysis and Applications (SIMAX)

Abstract

We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.
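One well-known member of this family is the randomized Kaczmarz method, recovered (roughly) by choosing the identity norm and a row-sampling distribution; a minimal sketch under those assumptions:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=500, rng=None):
    """Randomized Kaczmarz: at each step, sample one row (the 'sketch')
    and project the iterate onto the hyperplane that row defines."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(iters):
        i = rng.integers(m)
        x = x + (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_star = np.array([1.0, -1.0])
x = randomized_kaczmarz(A, A @ x_star, rng=0)
assert np.allclose(x, x_star, atol=1e-6)
```

Each projection is simultaneously a stochastic gradient step, a stochastic fixed-point step, and a stochastic projection step for the reformulated problem, which is the unification the abstract describes.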

[68] Structural sensing with deep learning: Strain estimation from acceleration data for fatigue assessment

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2020 Journal Paper Computer-Aided Civil and Infrastructure Engineering

Abstract

Many civil structures experience significant vibrations and repeated stress cycles during their life span. These conditions are the basis for fatigue analysis, which accurately establishes the remaining fatigue life of a structure and ideally requires a full-field strain assessment of the structure over years of data collection. Traditional inspection methods collect strain measurements with strain gauges over a short time span and extrapolate the measurements in time; nevertheless, large-scale deployment of strain gauges is expensive and laborious when more spatial information is desired. This paper introduces a deep learning-based approach that avoids this high cost by employing inexpensive data coming from acceleration sensors. The proposed approach uses the collected acceleration responses as inputs to a multistage deep neural network, based on long short-term memory and fully connected layers, to estimate the strain responses. The memory requirement of training on long acceleration sequences is reduced by a novel training strategy. In the evaluation of the method, a laboratory-scale horizontally curved girder subjected to various loading scenarios is tested.

[67] Randomized sketch descent methods for non-separable linearly constrained optimization

Ion Necoara, Martin Takáč
2020 Journal Paper IMA Journal of Numerical Analysis

Abstract

In this paper we consider large-scale smooth optimization problems with multiple linear coupled constraints. Due to the non-separability of the constraints, arbitrary random sketching is not guaranteed to work. Thus, we first investigate necessary and sufficient conditions on the sketch sampling for the resulting algorithms to be well defined. Based on these sampling conditions we develop new sketch descent methods for solving general smooth linearly constrained problems: random sketch descent and accelerated random sketch descent. To the best of our knowledge, this is the first convergence analysis of random sketch descent algorithms for optimization problems with multiple non-separable linear constraints. In the general case, when the objective function is smooth and non-convex, we prove a sublinear rate in expectation for the non-accelerated variant, with respect to an appropriate optimality measure. In the smooth convex case, we derive sublinear convergence rates in the expected objective values for both the non-accelerated and accelerated random sketch descent algorithms. Additionally, if the objective function satisfies a strong convexity type condition, both algorithms converge linearly in expectation. In special cases where complexity bounds are known for particular sketching algorithms, such as coordinate descent methods for optimization problems with a single linear coupled constraint, our theory recovers the best-known bounds. We also show that sketching the coordinate directions randomly produces better results than a fixed selection rule. Finally, we present numerical examples to illustrate the performance of our new algorithms.
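A minimal example of why the sketch must respect the coupling (our own illustration, not the paper's algorithm): for the single constraint sum(x) = const, the simplest admissible sketch moves two coordinates by equal and opposite amounts, which keeps every iterate feasible.

```python
import numpy as np

def constrained_cd_step(x, grad, lr, rng):
    """Random two-coordinate descent for min f(x) s.t. sum(x) = const:
    moving along e_i - e_j leaves the coupled constraint intact."""
    g = grad(x)
    i, j = rng.choice(x.size, size=2, replace=False)
    delta = lr * (g[i] - g[j])
    x = x.copy()
    x[i] -= delta
    x[j] += delta            # equal and opposite: sum(x) is unchanged
    return x

# min 0.5*||x - a||^2 subject to sum(x) = 0; solution is a - mean(a).
a = np.array([1.0, 2.0, -3.0, 4.0])
rng = np.random.default_rng(0)
x = np.zeros(4)
for _ in range(300):
    x = constrained_cd_step(x, lambda v: v - a, lr=0.4, rng=rng)
assert abs(x.sum()) < 1e-9                       # feasibility preserved
assert np.allclose(x, a - a.mean(), atol=1e-6)   # converged to projection
```

A single-coordinate update, by contrast, would immediately leave the feasible set, which is the sense in which "arbitrary random sketching is not guaranteed to work".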

[66] Modal Identification of Bridges using Mobile Sensors with Sparse Vibration Data

2020 Journal Paper ASCE's Journal of Engineering Mechanics

[65] Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy

Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
2020 Conference Paper AISTATS 2020

Abstract

In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which the sample size is gradually increased to quickly obtain a solution whose empirical loss is within a satisfactory statistical accuracy. Our proposed method is multistage: the solution of each stage serves as a warm start for the next stage, which contains more samples (including all samples from the previous stage). The multistage algorithm reduces the number of passes over the data needed to achieve the statistical accuracy of the full training set. Moreover, our algorithm is naturally suited to distributed implementations and exhibits the strong scaling property, meaning that additional speedup is always expected from using more computing nodes. Various iteration complexity results regarding descent direction computation, communication efficiency, and stopping criteria are analyzed in the convex setting. Our numerical results illustrate that the proposed method outperforms other comparable methods on learning problems including neural networks.

[64] Experimental Study on Digital Image Correlation for Deep Learning-Based Damage Diagnostic

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2020 Conference Paper Dynamics of Civil Structures, Volume 2 pp. 205-210

Abstract

Large quantities of data which contain detailed condition information over an extended period of time should be utilized to prioritize infrastructure repairs. As the temporal and spatial resolution of monitoring data drastically increase through advances in sensing technology, structural health monitoring applications reach the threshold of big data. Deep neural networks are ideally suited to use large representative training datasets to learn complex damage features. In the authors' previous study, a real-time deep learning platform was developed to solve the damage detection and localization challenge. The network was trained on a simulated structural connection mimicking the real test object, with a variety of loading cases, damage scenarios, and measurement noise levels, for successful and robust diagnosis of damage. In this study, the proposed damage diagnosis platform is validated using temporally and spatially dense data collected by Digital Image Correlation (DIC) from the specimen. Laboratory testing of the specimen with an induced damage condition is performed to evaluate the performance and efficiency of the damage detection and localization approach.

[63] Distributed Fixed Point Methods with Compressed Iterates

Selim Chraibi, Ahmed Khaled, Dmitry Kovalev, Peter Richtárik, Adil Salim, Martin Takáč
2019 arXiv Preprint

Abstract

We propose basic and natural assumptions under which iterative optimization methods with compressed iterates can be analyzed. This problem is motivated by the practice of federated learning, where a large model stored in the cloud is compressed before it is sent to a mobile device, which then proceeds with training based on local data. We develop standard and variance reduced methods, and establish communication complexity bounds. Our algorithms are the first distributed methods with compressed iterates, and the first fixed point methods with compressed iterates.
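The object of study can be pictured with a serial toy (not the paper's distributed algorithm): iterate x_{k+1} = C(T(x_k)), where T is a contraction and C a compression operator, here a simple deterministic quantizer.

```python
import numpy as np

def quantize(x, step=0.1):
    """Coordinate-wise quantizer: a simple deterministic compressor."""
    return np.round(x / step) * step

def compressed_fixed_point(T, compress, x0, iters=60):
    """Fixed-point iteration with compressed iterates: x = C(T(x))."""
    x = x0
    for _ in range(iters):
        x = compress(T(x))
    return x

# T is a 0.5-contraction with fixed point 2c; iterating the compressed
# map settles within the quantizer's resolution of that fixed point.
c = np.array([1.0, -2.0, 0.5, 3.0])
T = lambda x: 0.5 * x + c
x = compressed_fixed_point(T, quantize, np.zeros(4))
assert np.max(np.abs(x - 2 * c)) <= 0.1 + 1e-6
```

The toy shows the basic trade-off: compression caps the achievable accuracy at a level set by the compressor's resolution, while the contraction still pulls the iterates into that neighborhood.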

[62] High Resolution Bridge Mode Shape Identification Via Matrix Completion Approach

2019 Conference Paper Structural Health Monitoring 2019

Abstract

Mathematical platforms that can estimate modal characteristics from mobile sensors have received little investigation. Mobile sensors collect spatially dense data compared to the limited spatial density of fixed sensor networks. This feature potentially enables refinement of the identified natural mode shapes as well as more robust estimation of other modal characteristics, e.g., natural frequencies and damping ratios. In this paper, high-resolution natural mode shape identification of a simple-span bridge using mobile data is investigated. A recent methodology developed by the authors is used to reconstruct a full bridge response matrix from mobile data. The matrix completion technique approximates unobserved signals at many virtual stationary locations via a convex optimization procedure. This reconstructed data is then fed in batches into available output-only system identification algorithms to extract modal properties. Mode shape refinement is then performed by superimposing the identified results of all considered batches. The accuracy of matrix completion for signal reconstruction was demonstrated previously; however, the performance of the estimated signals for modal identification has not yet been examined. In this study, a numerical case study compares identification results from this procedure with those from a conventional sensing network consisting of fixed sensors.
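A generic sketch of low-rank matrix completion by alternating projection (hard-rank SVD truncation with observed entries re-imposed; the authors' method uses a convex formulation, so this is only a simplified relative):

```python
import numpy as np

def lowrank_complete(M_obs, mask, rank, iters=1000):
    """Alternate between projecting onto rank-`rank` matrices (SVD
    truncation) and re-imposing the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                       # rank projection
        X = (U * s) @ Vt
        X = np.where(mask, M_obs, X)         # keep observed entries fixed
    return X

# Rank-1 ground truth with ~20% of the entries hidden.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(6))
mask = rng.random(M.shape) > 0.2
X = lowrank_complete(np.where(mask, M, 0.0), mask, rank=1)
assert np.linalg.norm(X - M) < 1e-2 * np.linalg.norm(M)
```

In the bridge setting the rows/columns play the role of virtual stationary sensor channels and time, with the hidden entries being the locations the mobile sensor is not currently visiting.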

[61] Accelerating Distributed Stochastic L-BFGS by sampled 2nd-Order Information

Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang
2019 Conference Paper Beyond First Order Methods in ML Workshop @ NeurIPS 2019

[60] FD-Net with Auxiliary Time Steps: Fast Prediction of PDEs using Hessian-Free Trust-Region Methods

Nur Sila Gulgec, Zheng Shi, Neil Deshmukh, Shamim Pakzad, Martin Takáč
2019 Conference Paper Beyond First Order Methods in ML Workshop @ NeurIPS 2019

Abstract

Discovering the underlying physical behavior of complex systems is a crucial, but less well-understood topic in many engineering disciplines. This study proposes a finite-difference inspired convolutional neural network framework to learn hidden partial differential equations from given data and iteratively estimate future dynamical behavior. The methodology designs the filter sizes such that they mimic the finite difference between the neighboring points. By learning the governing equation, the network predicts the future evolution of the solution by using only a few trainable parameters. In this paper, we provide numerical results to compare the efficiency of the second-order Trust-Region Conjugate Gradient (TRCG) method with the first-order ADAM optimizer.
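The finite-difference intuition behind the filter design can be illustrated with the fixed 1-D stencil [1, -2, 1] / h^2, which the network's trainable filters generalize; this toy example (our own, not from the paper) shows the stencil recovering u_xx exactly for a quadratic field:

```python
import numpy as np

def second_derivative_stencil(u, h):
    """Fixed finite-difference filter [1, -2, 1] / h^2: the classical
    approximation of u_xx from values at neighboring grid points."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

h = 0.01
xs = np.arange(0.0, 1.0, h)
u = xs**2                      # u(x) = x^2, so u_xx = 2 everywhere
uxx = second_derivative_stencil(u, h)
```

FD-Net learns such filter weights from data rather than fixing them, but sizes the filters so they can express exactly this kind of neighboring-point difference.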

[59] Sampled Quasi-Newton Methods for Deep Learning

Albert S. Berahas, Majid Jahani, Martin Takáč
2019 Conference Paper Optimization and Machine Learning @ NeurIPS 2019

Abstract

We present two sampled quasi-Newton methods: sampled LBFGS and sampled LSR1. Contrary to the classical variants that sequentially build Hessian approximations, our proposed methods sample points randomly around the current iterate to produce these approximations. As a result, the approximations constructed make use of more reliable (recent and local) information, and do not depend on past information that could be significantly stale. We provide convergence guarantees for our proposed methods, and illustrate their performance in practice.
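The core sampling idea can be sketched as follows: curvature pairs (S, Y) come from random points around the current iterate rather than from past iterates. This is a simplified illustration with our own function names, checked on a quadratic where the pairs capture the true Hessian exactly:

```python
import numpy as np

def sample_curvature_pairs(grad, x, m, radius, seed=0):
    """Sample m points around the current iterate x (rather than reusing past
    iterates) and form curvature pairs (S, Y) from gradient differences."""
    rng = np.random.default_rng(seed)
    S = radius * rng.standard_normal((m, x.size))    # random displacements
    g = grad(x)
    Y = np.array([grad(x + s) - g for s in S])       # sampled curvature info
    return S, Y

# On a quadratic f(x) = 0.5 x^T A x, each pair satisfies y = A s exactly,
# so the sampled pairs reflect the true local Hessian.
d = 5
A = np.diag(np.arange(1.0, d + 1.0))
S, Y = sample_curvature_pairs(lambda v: A @ v, np.ones(d), m=3, radius=0.5)
```

The sampled pairs would then feed a standard L-BFGS or LSR1 update; because they are regenerated at every iteration, no stale information from earlier iterates enters the approximation.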

[58] Grow Your Samples and Optimize Better via Distributed Newton CG and Accumulating Strategy

Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
2019 Conference Paper Beyond First Order Methods in ML Workshop @ NeurIPS 2019

[57] A Layered Architecture for Active Perception: Image Classification using Deep Reinforcement Learning

Hossein K. Mousavi, Guangyi Liu, Weihang Yuan, Martin Takáč, Héctor Muñoz-Avila, Nader Motee
2019 arXiv Preprint

Abstract

We propose a planning and perception mechanism for a robot (agent), that can only observe the underlying environment partially, in order to solve an image classification problem. A three-layer architecture is suggested that consists of a meta-layer that decides the intermediate goals, an action-layer that selects local actions as the agent navigates towards a goal, and a classification-layer that evaluates the reward and makes a prediction. We design and implement these layers using deep reinforcement learning. A generalized policy gradient algorithm is utilized to learn the parameters of these layers to maximize the expected reward. Our proposed methodology is tested on the MNIST dataset of handwritten digits, which provides us with a level of explainability while interpreting the agent's intermediate goals and course of action.

[56] A Robust Multi-Batch L-BFGS Method for Machine Learning

Albert S. Berahas, Martin Takáč
2019 Journal Paper Optimization Methods and Software

Abstract

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which the data points used to compute function and gradients are purposely changed at each iteration to accelerate the learning process. Difficulties arise because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the updating process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, studies the convergence properties for both convex and nonconvex functions, and illustrates the behavior of the algorithm in a distributed computing platform on binary classification logistic regression and neural network training problems that arise in machine learning.
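The stable-updating idea can be sketched as follows: consecutive batches are chosen to overlap, and the curvature vector y is formed from gradients evaluated on the overlap only, so both gradients in the difference see the same data points. A toy least-squares illustration (names ours, not the paper's implementation):

```python
import numpy as np

def overlap_curvature_pair(grad_on, x_new, x_old, batch_old, batch_new):
    """Form the quasi-Newton pair (s, y) using only the overlap of two
    consecutive batches, so both gradients in y use the same samples."""
    overlap = np.intersect1d(batch_old, batch_new)
    s = x_new - x_old
    y = grad_on(x_new, overlap) - grad_on(x_old, overlap)
    return s, y, overlap

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)

def grad_on(x, idx):
    # gradient of 0.5 * ||A[idx] x - b[idx]||^2, restricted to samples idx
    return A[idx].T @ (A[idx] @ x - b[idx])

s, y, ov = overlap_curvature_pair(grad_on, np.ones(4), np.zeros(4),
                                  np.arange(0, 12), np.arange(8, 20))
```

If y were instead formed from gradients on two disjoint batches, the difference would mix data noise with curvature information, which is the source of the instability the paper addresses.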

[55] An Accelerated Communication-Efficient Primal-Dual Optimization Framework for Structured Machine Learning

Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takáč
2019 Journal Paper Optimization Methods and Software

Abstract

Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. Confronting this issue, a communication-efficient primal-dual coordinate ascent framework (CoCoA) and its improved variant CoCoA+ have been proposed, achieving a convergence rate of O(1/t) for solving empirical risk minimization problems with Lipschitz continuous losses. In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of O(1/t^2) in terms of reducing suboptimality. The analysis of this rate is also notable in that the convergence rate bounds involve constants that, except in extreme cases, are significantly reduced compared to those previously provided for CoCoA+. The results of numerical experiments are provided to show that acceleration can lead to significant performance gains.

[54] New Convergence Aspects of Stochastic Gradient Algorithms

Lam Minh Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk
2019 Journal Paper minor revision in Journal of Machine Learning Research (JMLR)

Abstract

The classical convergence analysis of SGD is carried out under the assumption that the norm of the stochastic gradient is uniformly bounded. While this might hold for some loss functions, it is violated for cases where the objective function is strongly convex. In Bottou et al. (2016), a new analysis of convergence of SGD is performed under the assumption that stochastic gradients are bounded with respect to the true gradient norm. We show that for stochastic problems arising in machine learning such a bound always holds; we also propose an alternative convergence analysis of SGD with a diminishing learning rate regime, which results in more relaxed conditions than those in Bottou et al. (2016). We then move on to the asynchronous parallel setting, and prove convergence of the Hogwild! algorithm in the same regime with diminishing learning rates. It is well known that SGD converges if the sequence of learning rates $\{\eta_t\}$ satisfies $\sum_{t=0}^\infty \eta_t = \infty$ and $\sum_{t=0}^\infty \eta_t^2 < \infty$. We show convergence of SGD for a strongly convex objective function without the bounded gradient assumption when $\{\eta_t\}$ is a diminishing sequence with $\sum_{t=0}^\infty \eta_t = \infty$. In other words, we extend the current state-of-the-art class of learning rates for which SGD is guaranteed to converge.
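As a toy illustration of the diminishing-learning-rate regime (our own sketch, not the paper's experiments): SGD with step size c/(t+1), which satisfies both classical conditions, on a strongly convex one-dimensional problem:

```python
import numpy as np

def sgd_diminishing(grad_sample, x0, n, c, epochs, seed=0):
    """SGD with step size eta_t = c / (t + 1): the sum of eta_t diverges
    while the sum of eta_t^2 converges."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = x - (c / (t + 1)) * grad_sample(x, i)
            t += 1
    return x

# Strongly convex toy objective f(x) = (1/n) sum_i 0.5 (x - a_i)^2,
# whose minimizer is the mean of the a_i (here 2.5).
a = np.array([1.0, 2.0, 3.0, 4.0])
x_final = sgd_diminishing(lambda x, i: x - a[i], 0.0, n=a.size, c=1.0, epochs=2000)
```

Note that the individual stochastic gradients x - a_i are unbounded over the domain, which is exactly the situation the bounded-gradient assumption rules out and this analysis permits.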

[53] Multi-Agent Image Classification via Reinforcement Learning

2019 Conference Paper Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)

Abstract

We investigate a classification problem using multiple mobile agents that are capable of collecting (partial) pose-dependent observations of an unknown environment. The objective is to classify an image (e.g., a map of a large area) over a finite time horizon. We propose a network architecture in which agents form a local belief, take local actions, and extract relevant features from their raw partial observations. Agents are allowed to exchange information with their neighboring agents and run a decentralized consensus protocol to update their own beliefs. It is shown how reinforcement learning techniques can be utilized to achieve decentralized implementation of the classification problem. Our experimental results on the MNIST handwritten digit dataset demonstrate the effectiveness of our proposed framework.

[52] Applying Deep Learning to the Newsvendor Problem

Afshin OroojlooyJadid, Lawrence Snyder, Martin Takáč
2019 Journal Paper IISE Transactions

Abstract

The newsvendor problem is one of the most basic and widely applied inventory models. There are numerous extensions of this problem. One important extension is the multi-item newsvendor problem, in which the demand of each item may be correlated with that of other items. If the joint probability distribution of the demand is known, the problem can be solved analytically. However, approximating the probability distribution is not easy and is prone to error; therefore, the resulting solution to the newsvendor problem may not be optimal. To address this issue, we propose an algorithm based on deep learning that optimizes the order quantities for all products based on features of the demand data. Our algorithm integrates the forecasting and inventory-optimization steps, rather than solving them separately as is typically done. The algorithm does not require knowledge of the probability distributions of the demand. Numerical experiments on real-world data suggest that our algorithm outperforms other approaches, including data-driven and SVM approaches, especially for demands with high volatility.
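The integrated forecasting-and-optimization idea rests on the newsvendor cost itself serving as the training loss. A minimal sketch (our own, with made-up numbers): minimizing the empirical newsvendor cost over the order quantity y recovers the classical critical-quantile solution when demand samples are observed directly:

```python
import numpy as np

def newsvendor_cost(y, demand, cp, ch):
    """Empirical newsvendor cost: shortage penalty cp per unsatisfied unit,
    holding penalty ch per leftover unit."""
    return np.mean(cp * np.maximum(demand - y, 0.0)
                   + ch * np.maximum(y - demand, 0.0))

# With demand samples observed directly, the cost is minimized at the
# cp / (cp + ch) empirical quantile (the classical critical-ratio solution).
demand = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
cp, ch = 3.0, 1.0
y_star = np.quantile(demand, cp / (cp + ch))
```

The paper's approach replaces the explicit quantile with a neural network that maps demand features to y and is trained directly on this cost, so forecasting and optimization are never separated.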

[51] Don’t Forget Your Teacher: A Corrective Reinforcement Learning Framework

MohammadReza Nazari, Majid Jahani, Lawrence V. Snyder, Martin Takáč
2019 arXiv Preprint arXiv

Abstract

Although reinforcement learning (RL) can provide reliable solutions in many settings, practitioners are often wary of the discrepancies between the RL solution and their status quo procedures. Therefore, they may be reluctant to adapt to the novel way of executing tasks proposed by RL. On the other hand, many real-world problems require relatively small adjustments from the status quo policies to achieve improved performance. Therefore, we propose a student-teacher RL mechanism in which the RL agent (the "student") learns to maximize its reward, subject to a constraint that bounds the difference between the RL policy and the "teacher" policy. The teacher can be another RL policy (e.g., trained under a slightly different setting), the status quo policy, or any other exogenous policy. We formulate this problem using a stochastic optimization model and solve it using a primal-dual policy gradient algorithm. We prove that the policy is asymptotically optimal. However, a naive implementation suffers from high variance and convergence to a stochastic optimal policy. With a few practical adjustments to address these issues, our numerical experiments confirm the effectiveness of our proposed method in multiple GridWorld scenarios.

[50] Entropy-Penalized Semidefinite Programming

Jakub Marecek, Mikhail Krechetov, Yury Maximov, Martin Takáč
2019 Conference Paper IJCAI 2019

Abstract

Low-rank methods for semidefinite programming (SDP) have gained considerable popularity, especially in machine learning applications. Their analyses often assume the use of determinant-based regularisers, which are rarely implemented because computing their gradient takes time cubic in the dimension in conventional implementations. We extend the convergence analyses of low-rank methods to a wide class of regularisers. Further, we show that the gradient of a well-known regulariser can be computed in time linear in the dimension, which makes the regularisation practical. Our results are illustrated on the Max-Cut SDP relaxation.

[49] TOP-SPIN: TOPic discovery via Sparse Principal component INterference

Martin Takáč, Selin Damla Ahipasaoglu, Ngai-Man Cheung, Peter Richtárik
2019 Conference Paper Springer Proceedings in Mathematics & Statistics (MOPTA)

Abstract

We propose a novel topic discovery algorithm for unlabeled images based on the bag-of-words (BoW) framework. We first extract a dictionary of visual words and subsequently for each image compute a visual word occurrence histogram. We view these histograms as rows of a large matrix from which we extract sparse principal components (PCs). Each PC identifies a sparse combination of visual words which co-occur frequently in some images but seldom appear in others. Each sparse PC corresponds to a topic, and images whose interference with the PC is high belong to that topic, revealing the common parts possessed by the images. We propose to solve the associated sparse PCA problems using an Alternating Maximization (AM) method, which we modify for the purpose of efficiently extracting multiple PCs in a deflation scheme. Our approach attacks the maximization problem in sparse PCA directly and is scalable to high-dimensional data. Experiments on automatic topic discovery and category prediction demonstrate encouraging performance of our approach.

[48] Convolutional Neural Network Approach for Robust Structural Damage Detection and Localization

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2019 Journal Paper Journal of Computing in Civil Engineering (Volume 33 Issue 3 - May 2019)

Abstract

Damage diagnosis has been a challenging inverse problem in structural health monitoring. The main difficulty is characterizing the unknown relation between the measurements and damage patterns (i.e., damage indicator selection). Such damage indicators would ideally be able to identify the existence, location, and severity of damage. Therefore, this procedure requires complex data processing algorithms and dense sensor arrays, which brings computational intensity with it. To address this limitation, this paper introduces convolutional neural network (CNN), which is one of the major breakthroughs in image recognition, to the damage detection and localization problem. The CNN technique has the ability to discover abstract features and complex classifier boundaries that are able to distinguish various attributes of the problem. In this paper, a CNN topology was designed to classify simulated damaged and healthy cases and localize the damage when it exists. The performance of the proposed technique was evaluated through the finite-element simulations of undamaged and damaged structural connections. Samples were trained by using strain distributions as a consequence of various loads with several different crack scenarios. Completely new damage setups were introduced to the model during the testing process. Based on the findings of the proposed study, the damage diagnosis and localization were achieved with high accuracy, robustness, and computational efficiency.

[47] Distributed Learning with Compressed Gradient Differences

Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, Peter Richtárik
2019 arXiv Preprint

Abstract

Training very large machine learning models requires a distributed computing approach, with communication of the model updates often being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of the updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which means that they necessarily suffer from several issues, such as the inability to converge to the true optimum in the batch mode, inability to work with a nonsmooth regularizer, and slow convergence rates. In this work we propose a new distributed learning method---DIANA---which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are vastly superior to existing rates. Our analysis of block-quantization and differences between ℓ2 and ℓ∞ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method.
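A minimal sketch of the DIANA-style update on a toy quadratic; the compressor choice (rand-k), step sizes, and function names are our illustrative assumptions. Each worker compresses the difference between its gradient and a locally maintained reference vector, which is what lets the method converge to the exact optimum despite compression:

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k compressor: keep k random coordinates, rescale by d/k."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

def diana_step(x, h, grads, lr, alpha, k, rng):
    """One DIANA-style round: each worker compresses the difference between its
    gradient and its reference vector h_i, then shifts h_i toward the gradient;
    the server averages the reconstructed gradient estimates."""
    msgs = [rand_k(g - hi, k, rng) for g, hi in zip(grads, h)]
    h_new = [hi + alpha * m for hi, m in zip(h, msgs)]
    g_hat = np.mean([hi + m for hi, m in zip(h, msgs)], axis=0)
    return x - lr * g_hat, h_new

# Two workers with f_i(x) = 0.5 ||x - b_i||^2; the optimum is the mean of b_i.
rng = np.random.default_rng(0)
d, k = 10, 5
b = [np.full(d, 1.0), np.full(d, 3.0)]
x, h = np.zeros(d), [np.zeros(d), np.zeros(d)]
for _ in range(3000):
    x, h = diana_step(x, h, [x - bi for bi in b], lr=0.1, alpha=0.5, k=k, rng=rng)
```

Because each h_i tracks its worker's gradient at the optimum, the compressed messages shrink toward zero as the iterates converge, unlike direct gradient compression whose noise persists.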

[46] On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches

Jie Liu, Yu Rong, Martin Takáč, Junzhou Huang
2018 arXiv Preprint

Abstract

This paper proposes a framework of L-BFGS based on (approximate) second-order information with stochastic batches, as a novel approach to finite-sum minimization problems. Different from the classical L-BFGS, where stochastic batches lead to instability, we use a smooth estimate for the evaluations of the gradient differences while achieving acceleration by well-scaling the initial Hessians. We provide theoretical analyses for both convex and nonconvex cases. In addition, we demonstrate that within the popular applications of least-squares and cross-entropy losses, the algorithm admits a simple implementation in the distributed environment. Numerical experiments support the efficiency of our algorithms.

[45] CoCoA: A General Framework for Communication-Efficient Distributed Optimization

Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, Martin Jaggi
2018 Journal Paper Journal of Machine Learning Research (JMLR)

Abstract

The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for the distributed environment, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.

[44] Matrix Completion under Interval Uncertainty: Highlights

Jakub Marecek, Peter Richtárik, Martin Takáč
2018 Conference Paper The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases

[43] Dual Free Adaptive Minibatch SDCA for Empirical Risk Minimization

Xi He, Rachael Tappenden, Martin Takáč
2018 Journal Paper Frontiers in Applied Mathematics and Statistics, section Optimization

Abstract

In this paper we develop an adaptive dual free Stochastic Dual Coordinate Ascent (adfSDCA) algorithm for regularized empirical risk minimization problems. This is motivated by the recent work on dual free SDCA of Shalev-Shwartz [1]. The novelty of our approach is that the coordinates to update at each iteration are selected non-uniformly from an adaptive probability distribution, and this extends the previously mentioned work which only allowed for a uniform selection of “dual” coordinates from a fixed probability distribution. We describe an efficient iterative procedure for generating the non-uniform samples, where the scheme selects the coordinate with the greatest potential to decrease the sub-optimality of the current iterate. We also propose a heuristic variant of adfSDCA that is more aggressive than the standard approach. Furthermore, in order to utilize multi-core machines we consider a mini-batch adfSDCA algorithm and develop complexity results that guarantee the algorithm's convergence. The work is concluded with several numerical experiments to demonstrate the practical benefits of the proposed approach.

[42] Anomaly Detection in Manufacturing Systems Using Structured Neural Networks

Jie Liu, Jianlin Guo, Philip Orlik, Masahiko Shibata, Daiki Nakahara, Satoshi Mii, Martin Takáč
2018 Conference Paper The 13th World Congress on Intelligent Control and Automation (WCICA 2018)

Abstract

This paper proposes innovative anomaly detection technologies for manufacturing systems. We combine an event-ordering-relationship-based structuring technique with deep neural networks to develop structured neural networks for anomaly detection. The event-ordering-based neural network structuring process is performed before the neural network training process and determines important neuron connections and weight initialization. It reduces the complexity of the neural networks and can improve anomaly detection accuracy. The structured time delay neural network (TDNN) is introduced for anomaly detection via supervised learning. To detect anomalies through unsupervised learning, we propose the structured autoencoder. The proposed structured neural networks outperform unstructured neural networks in terms of anomaly detection accuracy and can reduce test error by 20%. Compared with popular methods such as one-class SVM, decision trees, and distance-based algorithms, our structured neural networks can reduce anomaly detection misclassification error by as much as 64%.

[41] Innovative Sensing by Using Deep Learning Framework

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2018 Conference Paper In Dynamics of Civil Structures

Abstract

Structures experience large vibrations and stress variations during their life cycles. These reduce their load-carrying capacity, which is the main design criterion for many structures. It is therefore important to accurately establish the performance of structures after construction, which often requires full-field strain or stress measurements. Many traditional inspection methods collect strain measurements using wired strain gauges, which carry a high installation cost and have high power demand. In contrast, this paper introduces a new methodology that replaces these costly measurements with inexpensive data from wireless sensor networks. The study proposes to collect acceleration responses from a structure and feed them as input to a deep learning framework that estimates the stress or strain responses. The obtained stress or strain time series can then be used in many applications to better understand the condition of the structure. The designed deep learning architecture consists of multi-layer neural networks and Long Short-Term Memory (LSTM) layers, and learns the relationship between input and output by exploiting their temporal dependencies. In the evaluation of the method, a three-story steel building is simulated under various dynamic wind and earthquake loading scenarios. The acceleration time histories under these loading cases are utilized to predict the stress time series. The learned architecture is tested on acceleration time series that the structure has never experienced.

[40] Distributed Mini-Batch SDCA

Martin Takáč, Peter Richtárik, Nathan Srebro
2019 Journal Paper Journal of Machine Learning Research (JMLR) (to appear)

Abstract

We present an improved analysis of mini-batched stochastic dual coordinate ascent for regularized empirical loss minimization (i.e., SVM and SVM-type objectives). Our analysis allows for flexible sampling schemes, including those where data is distributed across machines, and combines a dependence on the smoothness of the loss and/or the data spread (measured through the spectral norm).

[39] Reinforcement Learning for Solving the Vehicle Routing Problem

MohammadReza Nazari, Afshin Oroojlooy, Lawrence V. Snyder, Martin Takáč
2018 Conference Paper Neural Information Processing Systems (NeurIPS) 2018

Abstract

We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using deep reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. Our method is faster in both training and inference than a recent method that solves the Traveling Salesman Problem (TSP), with nearly identical solution quality. On the more general VRP, our approach outperforms classical heuristics on medium-sized instances in both solution quality and computation time (after training). Our proposed framework can be applied to variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.

[38] SGD and Hogwild! Convergence Without the Bounded Gradients Assumption

Lam Minh Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč
2018 Conference Paper ICML 2018 (35th International Conference on Machine Learning)

Abstract

Stochastic gradient descent (SGD) is the optimization algorithm of choice in many machine learning applications such as regularized empirical risk minimization and training deep neural networks. The classical analysis of convergence of SGD is carried out under the assumption that the norm of the stochastic gradient is uniformly bounded. While this might hold for some loss functions, it is always violated for cases where the objective function is strongly convex. In (Bottou et al., 2016) a new analysis of convergence of SGD is performed under the assumption that stochastic gradients are bounded with respect to the true gradient norm. Here we show that for stochastic problems arising in machine learning such a bound always holds. Moreover, we propose an alternative convergence analysis of SGD with a diminishing learning rate regime, which results in more relaxed conditions than those in (Bottou et al., 2016). We then move on to the asynchronous parallel setting, and prove convergence of the Hogwild! algorithm in the same regime, obtaining the first convergence results for this method in the case of diminishing learning rates.

[37] A Deep Q-Network for the Beer Game, an Approach to Solve Inventory Optimization Problems

2017 Conference Paper Deep Reinforcement Learning Symposium @ Neural Information Processing Systems (NeurIPS) 2017

Abstract

The beer game is a decentralized, multi-agent, cooperative problem that can be modeled as a serial supply chain network in which agents cooperatively attempt to minimize the total cost of the network even though each agent can only observe its own local information. We develop a variant of the Deep Q-Network algorithm to solve this problem. Extensive numerical experiments show the effectiveness of our algorithm. Unlike most algorithms in the literature, our algorithm does not have any limits on the parameter values, and it provides good solutions even if the agents do not follow a rational policy. The algorithm can be extended to other decentralized multi-agent cooperative games with partially observed information, which is a common type of situation in supply chain problems.

[36] Structural Damage Detection Using Convolutional Neural Networks

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2017 Conference Paper In Model Validation and Uncertainty Quantification, Volume 3 (pp. 331-337). Springer, Cham.

Abstract

Detection of the deficiencies affecting the performance of structures has been studied over the past few decades. However, with long-term data collection from dense sensor arrays, accurate damage diagnosis has become a computationally challenging task. To address this problem, this paper introduces the convolutional neural network (CNN), which has led to breakthrough results in computer vision, to the damage detection challenge. The CNN technique has the ability to discover abstract features that can discriminate various aspects of interest. In our case, these features are used to classify "damaged" and "healthy" cases modeled through finite element simulations. The CNN is implemented using the Python library Theano with graphics processing unit (GPU) support to achieve higher performance on these data-intensive calculations. The accuracy and sensitivity of the proposed technique are assessed with a cracked steel gusset connection model with multiplicative noise. During the training procedure, strain distributions generated from different crack and loading scenarios are adopted. Completely unseen damage setups are introduced to the simulations during testing. Based on the findings of the proposed study, high accuracy, robustness, and computational efficiency are achieved for the damage diagnosis.

[35] Structural damage diagnosis with time-varying loads using convolutional neural networks

Nur Sila Gulgec, Martin Takáč, Shamim N. Pakzad
2017 Conference Paper SMAR 2017 (the fourth International Conference on Smart Monitoring, Assessment and Rehabilitation of Civil Structures)

Abstract

Damage diagnosis of structures subjected to time-varying environmental and operational conditions has been a challenging task. This task involves selecting a damage indicator to characterize the unknown relation between the measurements and damage patterns. The majority of conventional methods adopt hand-designed damage indicators, which can be inefficient for some damage patterns and require manual effort to design. To address these challenges, this work uses a deep learning method, the convolutional neural network (CNN), to learn complex damage features and create complex classifier boundaries. In the evaluation of the proposed methodology, multi-dimensional input samples are used, where each dimension contains an individual strain field resulting from a different force applied to the structure. The network is trained with several crack scenarios, and the learned architecture is tested on completely unseen damage setups. Based on the findings of the paper, CNNs fed with multi-dimensional inputs improve the accuracy of the damage diagnosis. Furthermore, they provide the opportunity to capture the behavior of structures under variations in the loading conditions.

[34] Modeling and Optimization: Theory and Applications MOPTA, Bethlehem, PA, USA, August 2016 Selected Contributions

Martin Takáč, Tamás Terlaky
2017 Edited Volume Springer Proceedings in Mathematics & Statistics

Abstract

This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

[33] Large Scale Distributed Hessian-Free Optimization for Deep Neural Network

Xi He, Dheevatsa Mudigere, Mikhail Smelyanskiy, Martin Takáč
2016 Conference Paper AAAI Workshop on Distributed Machine Learning

Abstract

Training deep neural networks is a high-dimensional and highly non-convex optimization problem. The stochastic gradient descent (SGD) algorithm and its variants are the current state-of-the-art solvers for this task. However, due to the non-convex nature of the problem, it has been observed that SGD slows down near saddle points. Recent empirical work claims that by detecting and escaping saddle points efficiently, training performance is likely to improve. With this objective, we revisit the Hessian-free optimization method for deep networks. We also develop its distributed variant and demonstrate superior scaling potential to SGD, which allows more efficient utilization of larger computing resources, thus enabling larger models and faster time to the desired solution. Furthermore, unlike the truncated Newton method (Martens' HF) that ignores negative curvature information by using a naive conjugate gradient method and Gauss-Newton Hessian approximation information, we propose a novel algorithm to explore negative curvature directions by solving the sub-problem with a stabilized bi-conjugate gradient method involving possibly indefinite stochastic Hessian information. We show that these techniques accelerate the training process for both the standard MNIST dataset and the TIMIT speech recognition problem, demonstrating robust performance with up to an order of magnitude larger batch sizes. This increased scaling potential is illustrated with near-linear speed-up on up to 16 CPU nodes for a simple 4-layer network.
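The primitive underlying Hessian-free methods is the Hessian-vector product, which can be approximated from two gradient evaluations without ever forming the Hessian. A minimal sketch (our own toy check on a quadratic, not the paper's distributed implementation):

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Finite-difference Hessian-vector product:
    H v ~ (grad(x + eps * v) - grad(x)) / eps.
    Curvature is accessed only through such products; H is never formed."""
    return (grad(x + eps * v) - grad(x)) / eps

# Check on a quadratic f(x) = 0.5 x^T A x, where the product equals A v.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
v = np.array([1.0, -1.0])
hv = hessian_vector_product(lambda z: A @ z, np.array([0.5, 0.5]), v)
```

An inner Krylov solver (conjugate gradient, or the stabilized bi-conjugate gradient variant the paper proposes for indefinite curvature) then uses only this product to solve the Newton sub-problem.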

[32] Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing

Chenxin Ma, Martin Takáč
2017 Conference Paper AAAI Workshop on Distributed Machine Learning

Abstract

In this paper we study an inexact damped Newton method implemented in a distributed environment. We start with the original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015]. We show that this algorithm may not scale well and propose algorithmic modifications that lead to less communication, better load-balancing and more efficient computation. We perform numerical experiments with a regularized empirical loss minimization instance described by a 273GB dataset.

[31] Stock-out Prediction in Multi-echelon Networks

Afshin OroojlooyJadid, Lawrence Snyder, Martin Takáč
2017 arXiv Preprint

Abstract

In multi-echelon inventory systems the performance of a given node is affected by events that occur at many other nodes and in many other time periods. For example, a supply disruption upstream will have an effect on downstream, customer-facing nodes several periods later as the disruption "cascades" through the system. There is very little research on stock-out prediction in single-echelon systems and (to the best of our knowledge) none on multi-echelon systems. However, in the real world, there is clearly significant interest in techniques for this sort of stock-out prediction. Therefore, our research aims to fill this gap by using deep neural networks (DNNs) to predict stock-outs in multi-echelon supply chains.

[30] A Coordinate-Descent Algorithm for Tracking Solutions in Time-Varying Optimal Power Flows

Jie Liu, Jakub Marecek, Andrea Simonetto, Martin Takáč
2018 Conference Paper 20th Power Systems Computation Conference

Abstract

Consider a polynomial optimisation problem, whose instances vary continuously over time. We propose to use a coordinate-descent algorithm for solving such time-varying optimisation problems. In particular, we focus on relaxations of transmission-constrained problems in power systems. On the example of the alternating-current optimal power flows (ACOPF), we bound the difference between the current approximate optimal cost generated by our algorithm and the optimal cost for a relaxation using the most recent data from above by a function of the properties of the instance and the rate of change to the instance over time. We also bound the number of floating-point operations that need to be performed between two updates in order to guarantee the error is bounded from above by a given constant.

[29] On the Complexity of Parallel Coordinate Descent

Rachael Tappenden, Martin Takáč, Peter Richtárik
2017 Journal Paper Optimization Methods and Software

Abstract

In this work we study the parallel coordinate descent method (PCDM) proposed by Richtárik and Takáč [26] for minimizing a regularized convex function. We adopt elements from the work of Xiao and Lu [39], and combine them with several new insights, to obtain sharper iteration complexity results for PCDM than those presented in [26]. Moreover, we show that PCDM is monotonic in expectation, which was not confirmed in [26], and we also derive the first high-probability iteration complexity result where the initial level set is unbounded.

[28] Hybrid Methods in Solving Alternating-Current Optimal Power Flows

Alan C. Liddell, Jie Liu, Jakub Marecek, Martin Takáč
2017 Journal Paper IEEE Transactions on Smart Grid

Abstract

Many steady-state problems in power systems, including rectangular power-voltage formulations of optimal power flows in the alternating-current model (ACOPF), can be cast as polynomial optimisation problems (POP). For a POP, one can derive strong convex relaxations, or rather hierarchies of ever stronger, but ever larger, relaxations. We study means of switching from solving the convex relaxation to a Newton method working on a non-convex Lagrangian of the POP.

[27] Distributed Optimization with Arbitrary Local Solvers

Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takáč
2017 Journal Paper Optimization Methods and Software

Abstract

With the growth of data and necessity for distributed optimization methods, solvers that work well on a single machine must be re-designed to leverage distributed computation. Recent work in this area has been limited by focusing heavily on developing highly specific methods for the distributed environment. These special-purpose methods are often unable to fully leverage the competitive performance of their well-tuned and customized single machine counterparts. Further, they are unable to easily integrate improvements that continue to be made to single machine methods. To this end, we present a framework for distributed optimization that both allows the flexibility of arbitrary solvers to be used on each (single) machine locally, and yet maintains competitive performance against other state-of-the-art special-purpose distributed methods. We give strong primal-dual convergence rate guarantees for our framework that hold for arbitrary local solvers. We demonstrate the impact of local solver selection both theoretically and in an extensive experimental comparison. Finally, we provide thorough implementation details for our framework, highlighting areas for practical performance gains.

[26] A low-rank coordinate-descent algorithm for semidefinite programming relaxations of optimal power flow

Jakub Mareček, Martin Takáč
2017 Journal Paper Optimization Methods and Software

Abstract

A novel rank-constrained reformulation of the alternating-current optimal power flow problem makes it possible to derive novel semidefinite programming (SDP) relaxations. For those, we develop a solver that is often as fast as Matpower's interior point method, at the same accuracy.

[25] Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

Lam Minh Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
2017 arXiv Preprint

Abstract

In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient dominated functions, both of which have some advantages compared to other modern stochastic gradient algorithms for nonconvex losses.

[24] SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient

Lam Minh Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
2017 Conference Paper ICML 2017 (34th International Conference on Machine Learning)

Abstract

In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems. Different from vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; compared to SAG/SAGA, SARAH does not require storage of past gradients. The linear convergence rate of SARAH is proven under a strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, a property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.
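The recursive estimate described above can be sketched in a few lines; this is a minimal illustration of the update rule, not the authors' reference implementation, and `grad_i` is a hypothetical callback returning the gradient of the i-th component function.

```python
import numpy as np

def sarah(grad_i, w0, n, step, outer_iters, inner_iters, seed=0):
    """Minimal SARAH sketch: v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(outer_iters):
        # Outer loop: one full-gradient evaluation anchors the recursion.
        v = sum(grad_i(w, i) for i in range(n)) / n
        w_prev, w = w, w - step * v
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Recursive gradient estimate: no table of past gradients is kept.
            v = grad_i(w, i) - grad_i(w_prev, i) + v
            w_prev, w = w, w - step * v
    return w

# Toy strongly convex problem: f_i(w) = 0.5*||w - c_i||^2, minimizer mean(c_i) = 2.
c = np.array([1.0, 2.0, 3.0])
w_star = sarah(lambda w, i: w - c[i], np.zeros(1), n=3, step=0.5,
               outer_iters=5, inner_iters=30)
```

On this toy problem the recursion drives `w_star` to the minimizer 2.0; the point of the sketch is that only the running estimate `v` is stored, unlike the gradient tables required by SAG/SAGA.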

[23] Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption

Jie Liu, Martin Takáč
2017 Conference Paper Proceedings of MOPTA 2016

Abstract

We propose a projected semi-stochastic gradient descent method with mini-batches for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD). We are able to prove linear convergence under a weak strong convexity assumption; no strong convexity is required for minimizing the sum of smooth convex functions subject to a compact polyhedral set, a setting that remains popular across the machine learning community. Our PS2GD preserves low cost per iteration and high optimization accuracy via a stochastic variance-reduced gradient technique, and admits a simple parallel implementation with mini-batches. Moreover, PS2GD is also applicable to the dual problem of the SVM with hinge loss.

[22] Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption

Chenxin Ma, Rachael Tappenden, Martin Takáč
2016 Journal Paper Journal of Machine Learning Research

Abstract

In this paper we generalize the framework of the feasible descent method (FDM) to a randomized (R-FDM) and a coordinate-wise random feasible descent method (RC-FDM) framework. We show that the famous SDCA algorithm for optimizing the SVM dual problem, or the stochastic coordinate descent method for the LASSO problem, fits into the framework of RC-FDM. We prove linear convergence for both R-FDM and RC-FDM under the weak strong convexity assumption. Moreover, we show that the duality gap converges linearly for RC-FDM, which implies that the duality gap also converges linearly for SDCA applied to the SVM dual problem.

[21] A Multi-Batch L-BFGS Method for Machine Learning

Albert S. Berahas, Jorge Nocedal, Martin Takáč
2016 Conference Paper NeurIPS

Abstract

The question of how to parallelize the stochastic gradient descent (SGD) method has received much attention in the literature. In this paper, we focus instead on batch methods that use a sizeable fraction of the training set at each iteration to facilitate parallelism, and that employ second-order information. In order to improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS, a popular batch method in machine learning. These difficulties arise because L-BFGS employs gradient differences to update the Hessian approximations; when these gradients are computed using different data points the process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, illustrates the behavior of the algorithm in a distributed computing platform, and studies its convergence properties for both the convex and nonconvex cases.

[20] On optimal probabilities in stochastic coordinate descent methods (code: `NSync)

Peter Richtárik, Martin Takáč
2016 Journal Paper Optimization Letters, 10(6), 1233-1243

Abstract

We propose and analyze a new parallel coordinate descent method—NSync—in which at each iteration a random subset of coordinates is updated, in parallel, allowing for the subsets to be chosen using an arbitrary probability law. This is the first method of this type. We derive convergence rates under a strong convexity assumption, and comment on how to assign probabilities to the sets to optimize the bound. The complexity and practical performance of the method can outperform its uniform variant by an order of magnitude. Surprisingly, the strategy of updating a single randomly selected coordinate per iteration—with optimal probabilities—may require fewer iterations, both in theory and practice, than the strategy of updating all coordinates at every iteration.
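One iteration of this scheme can be sketched as follows. The sampling law below (each coordinate included independently with its own probability) is only one instance of the arbitrary laws the paper allows, and the coordinate-wise stepsize parameters `v` are an assumed input, not the paper's derived values.

```python
import numpy as np

def nsync(grad, x0, probs, v, iters, seed=0):
    """Sketch of NSync: each iteration updates a random subset of coordinates.

    Coordinate i is included in the update set independently with probability
    probs[i]; each selected coordinate takes the step -grad_i(x) / v[i].
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        S = rng.random(x.size) < probs      # random subset of coordinates
        g = grad(x)
        x[S] -= g[S] / v[S]                 # updates could run in parallel
    return x

# Separable quadratic f(x) = 0.5*||x - b||^2: each touched coordinate lands
# exactly on its minimizer, so x -> b once every coordinate has been sampled.
b = np.array([1.0, -2.0, 0.5, 4.0])
x = nsync(lambda x: x - b, np.zeros(4), probs=np.full(4, 0.5),
          v=np.ones(4), iters=100)
```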

[19] Matrix Completion under Interval Uncertainty

Jakub Mareček, Peter Richtárik, Martin Takáč
2016 Journal Paper European Journal of Operational Research

Abstract

Matrix completion under interval uncertainty can be cast as a matrix completion problem with element-wise box constraints. We present an efficient alternating-direction parallel coordinate-descent method for the problem. We show that the method outperforms any other known method on a benchmark in image in-painting in terms of signal-to-noise ratio, and that it provides high-quality solutions for an instance of collaborative filtering with 100,198,805 recommendations within 5 minutes on a single personal computer.

[18] SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization

Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq
2016 Conference Paper ICML 2016 (33rd International Conference on Machine Learning)

Abstract

We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the dual variables. However, unlike existing methods such as stochastic dual coordinate ascent, SDNA is capable of utilizing all curvature information contained in the examples, which leads to striking improvements in both theory and practice - sometimes by orders of magnitude. In the special case when an L2-regularizer is used in the primal, the dual problem is a concave quadratic maximization problem plus a separable term. In this regime, SDNA in each step solves a proximal subproblem involving a random principal submatrix of the Hessian of the quadratic function; whence the name of the method. If, in addition, the loss functions are quadratic, our method can be interpreted as a novel variant of the recently introduced Iterative Hessian Sketch.
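For the quadratic regime described above, the "principal submatrix" step admits a compact sketch. This is an illustrative reading of the abstract under stated assumptions (a plain concave quadratic dual with no separable term; uniform sampling of subsets of fixed size), not the paper's pseudocode.

```python
import numpy as np

def sdna_quadratic(Q, b, tau, iters, seed=0):
    """SDNA sketch for maximizing the concave quadratic D(a) = -0.5 a'Qa + b'a.

    Each iteration samples a random subset S of tau dual coordinates and
    maximizes D exactly over a_S, i.e. solves a small linear system with the
    principal submatrix Q[S, S] -- the curvature information that plain
    stochastic dual coordinate ascent ignores.
    """
    rng = np.random.default_rng(seed)
    a = np.zeros(b.size)
    for _ in range(iters):
        S = rng.choice(b.size, size=tau, replace=False)
        grad_S = b[S] - Q[:, S].T @ a            # gradient of D restricted to S
        a[S] += np.linalg.solve(Q[np.ix_(S, S)], grad_S)
    return a

# Small positive-definite example: the maximizer of D solves Q a = b.
Q = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.0]])
b = np.array([1.0, 2.0, 3.0])
a = sdna_quadratic(Q, b, tau=2, iters=200)
```

With `tau = 1` the inner solve degenerates to a scalar division and the sketch reduces to ordinary stochastic dual coordinate ascent, which makes the extra curvature used by larger subsets easy to see.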

[17] Primal-Dual Rates and Certificates

Celestine Dunner, Simone Forte, Martin Takáč, Martin Jaggi
2016 Conference Paper ICML 2016 (33rd International Conference on Machine Learning)

Abstract

We propose an algorithm-independent framework to equip existing optimization methods with primal-dual certificates. Such certificates and corresponding rate of convergence guarantees are important for practitioners to diagnose progress, in particular in machine learning applications. We obtain new primal-dual convergence rates e.g. for the Lasso as well as many L1, Elastic-Net and group-lasso-regularized problems. The theory applies to any norm-regularized generalized linear model. Our approach provides efficiently computable duality gaps which are globally defined, without modifying the original problems in the region of interest.

[16] Distributed coordinate descent method for learning with big data (code: Hydra)

Peter Richtárik, Martin Takáč
2016 Journal Paper Journal of Machine Learning Research

Abstract

In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method for solving loss minimization problems with big data. We initially partition the coordinates (features) and assign each partition to a different node of a cluster. At every iteration, each node picks a random subset of the coordinates from those it owns, independently from the other computers, and in parallel computes and applies updates to the selected coordinates based on a simple closed-form formula. We give bounds on the number of iterations sufficient to approximately solve the problem with high probability, and show how it depends on the data and on the partitioning. We perform numerical experiments with a LASSO instance described by a 3TB matrix.

[15] Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting

Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
2016 Journal Paper IEEE Journal of Selected Topics in Signal Processing

Abstract

We propose mS2GD: a method incorporating a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent (S2GD). We consider the problem of minimizing a strongly convex function represented as the sum of an average of a large number of smooth convex functions, and a simple nonsmooth convex regularizer. Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps. The process is repeated a few times with the last iterate becoming the new starting point. The novelty of our method is in the introduction of mini-batching into the computation of stochastic steps. In each step, instead of choosing a single function, we sample $b$ functions, compute their gradients, and compute the direction based on this. We analyze the complexity of the method and show that it benefits from two speedup effects. First, we prove that as long as $b$ is below a certain threshold, we can reach any predefined accuracy with less overall work than without mini-batching. Second, our mini-batching scheme admits a simple parallel implementation, and hence is suitable for further acceleration by parallelization.
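One outer epoch of the scheme described above can be sketched as follows, a minimal illustration under assumptions of my own: an L1 regularizer (so the proximal operator is soft-thresholding) and a hypothetical `grad_i` callback for the component gradients.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ms2gd_epoch(grad_i, prox, x, n, b, step, inner_iters, seed=0):
    """One mS2GD epoch: a deterministic full-gradient step at x, then many
    mini-batch variance-reduced proximal steps; the last inner iterate is
    returned as the next starting point."""
    rng = np.random.default_rng(seed)
    mu = sum(grad_i(x, i) for i in range(n)) / n      # full gradient at x
    y = x.copy()
    for _ in range(inner_iters):
        B = rng.choice(n, size=b, replace=False)      # mini-batch of size b
        v = mu + sum(grad_i(y, i) - grad_i(x, i) for i in B) / b
        y = prox(y - step * v, step)                  # proximal stochastic step
    return y

# Toy problem: min_y 0.5*||y - mean(c_i)||^2 + lam*||y||_1 via f_i(y) = 0.5*||y - c_i||^2.
c_i = np.array([[2.0, -1.0], [4.0, -3.0]])
lam = 1.0
y = np.zeros(2)
for epoch in range(30):
    y = ms2gd_epoch(lambda y, i: y - c_i[i],
                    lambda z, h: soft_threshold(z, lam * h),
                    y, n=2, b=1, step=0.4, inner_iters=20, seed=epoch)
```

Here the closed-form solution is `soft_threshold(mean(c_i), lam)`, i.e. `[2, -1]`, which the iterates approach; increasing `b` trades more gradient evaluations per step for reduced variance in `v`, the trade-off the complexity analysis quantifies.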

[14] Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?

Chenxin Ma, Martin Takáč
2015 Conference Paper OptML@NeurIPS 2015

Abstract

In this paper we study the effect of the way that the data is partitioned in distributed optimization. The original DiSCO algorithm [Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss, Yuchen Zhang and Lin Xiao, 2015] partitions the input data based on samples. We describe how the original algorithm has to be modified to allow partitioning on features and show its efficiency both in theory and also in practice.

[13] Dual Free SDCA for Empirical Risk Minimization with Adaptive Probabilities

Xi He, Martin Takáč
2015 Conference Paper OptML@NeurIPS 2015

Abstract

In this paper we develop dual free SDCA with adaptive probabilities for regularized empirical risk minimization. This extends recent work of Shai Shalev-Shwartz [SDCA without Duality, arXiv:1502.06177] to allow non-uniform selection of the "dual" coordinate in SDCA. Moreover, the probabilities can change over time, which makes the method more efficient than uniform selection. Our work focuses on generating adaptive probabilities through an iterative process, preferring coordinates with the highest potential to decrease sub-optimality. We also propose a more aggressive practical variant, adfSDCA+. We conclude with multiple experiments that show the efficiency of the proposed algorithms.

[12] Adding vs. Averaging in Distributed Primal-Dual Optimization

Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč
2015 Conference Paper ICML 2015 (32nd International Conference on Machine Learning)

Abstract

Distributed optimization algorithms for large-scale machine learning suffer from a communication bottleneck. Reducing communication makes the efficient aggregation of partial work from different machines more challenging. In this paper we present a novel generalization of the recent communication efficient primal-dual coordinate ascent framework (CoCoA). Our framework, CoCoA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes only allowed conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both CoCoA as well as our new variants, and generalize the theory for both methods to also cover non-smooth convex loss functions. We provide an extensive experimental comparison on several real-world distributed datasets, showing markedly improved performance, especially when scaling up the number of machines.

[11] Parallel Coordinate Descent Methods for Big Data Optimization

Peter Richtárik, Martin Takáč
2015 Journal Paper Mathematical Programming

Abstract

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a smooth convex function and a simple separable convex function. The theoretical speedup, as compared to the serial method, and referring to the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also works in the mode when the number of blocks being updated at each iteration is random, which allows for modeling situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros in 2 hours on a large memory node with 24 cores.

[10] mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting (code: mS2GD)

Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
2014 Conference Paper OPT 2014: Optimization for Machine Learning @NeurIPS 2014

Abstract

We propose a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent applied to the problem of minimizing a strongly convex composite function represented as the sum of an average of a large number of smooth convex functions, and a simple nonsmooth convex function. Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps. The process is repeated a few times with the last iterate becoming the new starting point. The novelty of our method is in the introduction of mini-batching into the computation of stochastic steps. In each step, instead of choosing a single function, we sample b functions, compute their gradients, and compute the direction based on this. We analyze the complexity of the method and show that the method benefits from two speedup effects. First, we prove that as long as b is below a certain threshold, we can reach a predefined accuracy with less overall work than without mini-batching. Second, our mini-batching scheme admits a simple parallel implementation, and hence is suitable for further acceleration by parallelization.

[9] Communication-Efficient Distributed Dual Coordinate Ascent

Martin Jaggi, Virginia Smith, Martin Takáč, Jonathan Terhorst, Thomas Hofmann, Michael I. Jordan
2014 Conference Paper NeurIPS

Abstract

Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA converges to the same .001-accurate solution quality on average 25x as quickly.

[8] Randomized Coordinate Descent Methods for Big Data Optimization

Martin Takáč
2014 PhD Thesis, University of Edinburgh

[7] Distributed Block Coordinate Descent for Minimizing Partially Separable Functions

Jakub Mareček, Peter Richtárik, Martin Takáč
2014 Journal Paper Numerical Analysis and Optimization 2014, Springer Proceedings in Mathematics and Statistics

Abstract

In this work we propose a distributed randomized block coordinate descent method for minimizing a convex function with a huge number of variables/coordinates. We analyze its complexity under the assumption that the smooth part of the objective function is partially block separable, and show that the degree of separability directly influences the complexity. This extends the results in [22] to a distributed environment. We first show that partially block separable functions admit an expected separable overapproximation (ESO) with respect to a distributed sampling, compute the ESO parameters, and then specialize complexity results from recent literature that hold under the generic ESO assumption. We describe several approaches to distribution and synchronization of the computation across a cluster of multi-core computers and provide promising computational results.

[6] Fast Distributed Coordinate Descent for Non-Strongly Convex Losses

Olivier Fercoq, Zheng Qu, Peter Richtárik, Martin Takáč
2014 Conference Paper MLSP2014: IEEE International Workshop on Machine Learning for Signal Processing

Abstract

We propose an efficient distributed randomized coordinate descent method for minimizing regularized non-strongly convex loss functions. The method attains the optimal O(1/k^2) convergence rate, where k is the iteration counter. The core of the work is the theoretical study of stepsize parameters. We have implemented the method on Archer - the largest supercomputer in the UK - and show that the method is capable of solving a (synthetic) LASSO optimization problem with 50 billion variables.

[5] Mini-Batch Primal and Dual Methods for SVMs

Martin Takáč, Avleen Bijral, Peter Richtárik, Nathan Srebro
2013 Conference Paper ICML 2013 (30th International Conference on Machine Learning)

Abstract

We address the issue of using mini-batches in stochastic optimization of SVMs. We show that the same quantity, the spectral norm of the data, controls the parallelization speedup obtained for both primal stochastic subgradient descent (SGD) and stochastic dual coordinate ascent (SDCA) methods, and use it to derive novel variants of mini-batched SDCA. Our guarantees for both methods are expressed in terms of the original nonsmooth primal problem based on the hinge loss.

[4] Efficient serial and parallel coordinate descent methods for huge-scale truss topology design

Peter Richtárik, Martin Takáč
2011 Conference Paper Operations Research Proceedings 2011, pp. 27-32, Springer-Verlag 2012

Abstract

In this work we propose solving huge-scale instances of the truss topology design problem with coordinate descent methods. We develop four efficient codes: serial and parallel implementations of randomized and greedy rules for the selection of the variable (potential bar) to be updated in the next iteration. Both serial methods enjoy an O(n/k) iteration complexity guarantee, where n is the number of potential bars and k the iteration counter. Our parallel implementations, written in CUDA and running on a graphical processing unit (GPU), are capable of speedups of up to two orders of magnitude when compared to their serial counterparts. Numerical experiments were performed on instances with up to 30 million potential bars.

[3] Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function

Peter Richtárik, Martin Takáč
2011 Journal Paper Mathematical Programming, Series A, 38 pages, 2012

Abstract

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O(\frac{n}{\epsilon} \log \frac{1}{\rho})$ iterations, where $n$ is the number of blocks. For strongly convex functions the method converges linearly. This extends recent results of Nesterov, which cover the smooth case, to composite minimization, while at the same time improving the complexity by the factor of 4 and removing $\epsilon$ from the logarithmic term. More importantly, in contrast with the aforementioned work, in which the author achieves the results by applying the method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Finally, we demonstrate numerically that the algorithm is able to solve huge-scale $\ell_1$-regularized least squares and support vector machine problems with a billion variables.
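For intuition, a serial sketch of such a method on an $\ell_1$-regularized least-squares problem is given below (one uniformly chosen coordinate per iteration and Euclidean norms; the paper's block structure, arbitrary probabilities and non-Euclidean setup are omitted).

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rcd_lasso(A, b, lam, iters, seed=0):
    """Randomized coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration picks a uniformly random coordinate and minimizes the
    composite objective's quadratic upper bound along it, which is a
    closed-form soft-thresholding step with stepsize 1/||A_i||^2.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = (A ** 2).sum(axis=0)           # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                      # maintained residual Ax - b
    for _ in range(iters):
        i = rng.integers(n)
        g = A[:, i] @ r                # i-th partial derivative of smooth part
        x_new = soft_threshold(x[i] - g / L[i], lam / L[i])
        r += A[:, i] * (x_new - x[i])  # cheap O(m) residual update
        x[i] = x_new
    return x

# With an orthonormal A the minimizer is soft_threshold(A.T @ b, lam).
A = np.eye(3)
b = np.array([3.0, -0.5, 1.5])
x = rcd_lasso(A, b, lam=1.0, iters=200)
```

The maintained residual is what keeps the per-iteration cost proportional to a single column of the data, the property that makes the huge-scale experiments in the paper feasible.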

[2] Efficiency of randomized coordinate descent methods on minimization problems with a composite objective function

Peter Richtárik, Martin Takáč
2011 Conference Paper Proceedings of SPARS11 (4th Workshop on Signal Processing with Adaptive Sparse Structured Representations), June 27-30, 2011

[1] Sensitivity analysis of the early exercise boundary for American style of Asian options

Daniel Ševcovič, Martin Takáč
2011 Journal Paper International Journal of Numerical Analysis and Modeling, Ser. B, 2(2-3), 231-247, 2011

Abstract

In this paper we analyze American style of floating strike Asian call options belonging to the class of financial derivatives whose payoff diagram depends not only on the underlying asset price but also on the path average of underlying asset prices over some predetermined time interval. The mathematical model for the option price leads to a free boundary problem for a parabolic partial differential equation. Applying fixed domain transformation and transformation of variables we develop an efficient numerical algorithm based on a solution to a non-local parabolic partial differential equation for the transformed variable representing the synthesized portfolio. For various types of averaging methods we investigate the dependence of the early exercise boundary on model parameters.