Advanced Optimization: Theory and Applications
			Course Outcomes
			
			-  Learn the additional theory from calculus and linear algebra needed for optimization.
 
			-  Learn to model various applications from data science as optimization problems.
 
			-  Learn to prove convergence estimates and complexity bounds for the algorithms.
 
			-  Learn to implement optimization solvers efficiently in Python.
 
			-  Demonstrate expertise in applying optimization methods to research problems.
 
			
			This course teaches numerical optimization techniques to undergraduate (UG) and postgraduate (PG) students.
			
			 -  Unit 1:  Review of convexity, duality, and classical theory and algorithms for convex optimization (6 hours) 
 
			 -  Unit 2:  Nonlinear and non-smooth optimization: projected gradient methods (see the first sketch after this list), accelerated gradient methods, subgradient projection methods, adaptive methods, second-order methods, dual methods, solvers for min-max problems, alternating minimization, the EM algorithm, and convergence estimates (12 hours) 
 
			 -  Unit 3:  Applications of advanced optimization: sparse recovery (see the second sketch after this list), low-rank matrix recovery, recommender systems, extreme classification, and generative adversarial methods (6 hours) 
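
			A minimal sketch of projected gradient descent, the first Unit 2 method, in Python (the course's implementation language). The least-squares objective, the norm-ball constraint set, and the 1/L step size are illustrative assumptions, not prescribed by the syllabus.

			import numpy as np

			def project_ball(x, r=1.0):
			    # Euclidean projection onto the ball {x : ||x||_2 <= r}
			    norm = np.linalg.norm(x)
			    return x if norm <= r else (r / norm) * x

			def projected_gradient(A, b, r=1.0, iters=200):
			    # Minimize 0.5*||Ax - b||^2 subject to ||x||_2 <= r
			    # (illustrative problem choice, not from the syllabus)
			    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
			    x = np.zeros(A.shape[1])
			    for _ in range(iters):
			        grad = A.T @ (A @ x - b)              # gradient of the smooth loss
			        x = project_ball(x - step * grad, r)  # gradient step, then project
			    return x

			rng = np.random.default_rng(0)
			A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
			print(projected_gradient(A, b, r=1.0))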
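
			Similarly, a minimal sketch of ISTA (proximal gradient descent) for the lasso formulation of the Unit 3 sparse recovery topic, min 0.5*||Ax - b||^2 + lam*||x||_1; the random data and the weight lam are illustrative assumptions.

			import numpy as np

			def soft_threshold(z, t):
			    # Proximal operator of t*||.||_1 (elementwise soft-thresholding)
			    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

			def ista(A, b, lam=0.1, iters=500):
			    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
			    x = np.zeros(A.shape[1])
			    for _ in range(iters):
			        grad = A.T @ (A @ x - b)             # gradient of the smooth part
			        x = soft_threshold(x - step * grad, step * lam)
			    return x

			rng = np.random.default_rng(1)
			A = rng.standard_normal((40, 100))
			x_true = np.zeros(100); x_true[:5] = 1.0     # 5-sparse ground truth
			print(np.round(ista(A, A @ x_true, lam=0.05)[:10], 2))

			Both sketches use the conservative 1/L step size derived from the gradient Lipschitz constant; the accelerated and adaptive variants covered in Unit 2 improve on this basic convergence rate.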
  
			
		        References:
			
			-  Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
 
			-  Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016.
 
			-  Prateek Jain and Purushottam Kar, Non-convex Optimization for Machine Learning, arXiv, 2017.
 
			-  W. Hu, Nonlinear Optimization in Machine Learning.
 
			
			Weightages:
			
			-  Theory assignments: 15 marks
 
			-  Mid-semester examination: 25 marks
 
			-  End-semester examination: 30 marks
 
			-  Assessment of four projects: 30 marks