Many problems in statistical learning and neural computation involve optimization with convex domain constraints. In this talk, we discuss a large class of quadratic programming problems in which the optimization is confined to an arbitrary closed convex domain in R^p, with p ≥ 1. We extend Sha et al.'s multiplicative updates for quadratic programming under nonnegativity constraints to arbitrary closed convex domain constraints. As an advantage over other multiplicative update methods in the machine learning literature, our algorithm provides solutions to penalized linear regression with any convex penalty function. Moreover, our algorithm can be easily implemented in any language, such as Python, R, and MATLAB. Example applications include the ridge, lasso, elastic net, and Lq (q ≥ 1) penalties. Simulation results demonstrate the consistency and simplicity of our algorithm.
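For concreteness, the following is a minimal Python sketch of the nonnegative-case update of Sha et al. that the talk generalizes; it solves minimize 0.5 v'Av + b'v subject to v ≥ 0 by splitting A into its positive and negative parts. The function name, iteration count, and the small epsilon safeguard are illustrative choices, not part of the talk, and the convex-domain extension itself is not reproduced here.

```python
import numpy as np

def multiplicative_qp_update(A, b, v0, n_iter=500, eps=1e-12):
    """Multiplicative updates of Sha et al. for the nonnegative QP
           minimize 0.5 * v @ A @ v + b @ v  subject to v >= 0,
    with A symmetric and v0 a strictly positive starting point.
    (Hypothetical helper for illustration only.)"""
    Ap = np.maximum(A, 0.0)    # elementwise positive part of A
    Am = np.maximum(-A, 0.0)   # elementwise magnitudes of the negative part
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(n_iter):
        a = Ap @ v             # (A^+ v)_i
        c = Am @ v             # (A^- v)_i
        # Closed-form multiplicative factor; keeps v nonnegative by construction.
        v *= (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a + eps)
    return v

# Quick check: for A = [[2, 1], [1, 2]] and b = [-1, -1], the unconstrained
# minimizer A^{-1} @ [1, 1] = [1/3, 1/3] is already nonnegative, so the
# iterates should converge to roughly (0.333, 0.333).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
print(multiplicative_qp_update(A, b, v0=np.ones(2)))
```

Because the update multiplies each coordinate by a nonnegative factor, iterates started in the positive orthant never leave it, which is the property the talk extends from the nonnegative orthant to general closed convex domains.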