1 Introduction
The use of machine learning to empower automated data analytics has been gaining popularity in various application domains, such as medical diagnosis [1], [2], [3], credit risk assessment [4], face recognition [5], and more. With a well-trained model in hand, machine learning inference enables automatic prediction on new inputs. With the prevalent adoption of cloud computing, outsourcing such inference services to the cloud is becoming increasingly popular [6], [7], due to the well-understood benefits of cloud computing [8]. This practice, however, raises critical privacy concerns. First, the provider's model is often proprietary, as training an effective model demands a significant investment in datasets, computing resources, and specialized expertise. The provider is thus naturally unwilling to expose the model in cleartext to the cloud. Second, the client's data supplied as input to the model could also be sensitive, such as medical or financial records. Submitting the inputs in cleartext could easily violate the client's privacy. Therefore, it is essential to embed security in the inference outsourcing design from the very beginning, so that both the privacy of the proprietary model and that of the sensitive client data can be assured.