Zenpli
Zenpli, an innovative company specializing in fraud detection, collaborated with Semantiks to optimize its deployment and model management capabilities. To improve its fraud detection model, Zenpli sought to evaluate and implement scalable, automated, and efficient MLOps pipelines. To achieve this, Semantiks guided Zenpli in evaluating three Google Cloud Platform (GCP) services: AutoML, BigQuery ML, and Vertex AI. The engagement was structured in two comprehensive work phases.
Zenpli's fraud detection model, hosted on AWS, faced scalability issues and lacked efficient model management processes. As the company grew, they needed a more robust solution to manage the training, deployment, and maintenance of the model. Key challenges included:
- Ensuring scalability to handle growing volumes of data.
- Implementing automated workflows for training and deployment.
- Evaluating the best MLOps platform among several options, aligned with their long-term operational objectives.
Zenpli's primary objective was to evaluate the capabilities of AutoML, BigQuery ML and Vertex AI to select the most appropriate platform to support their growth.
Phase 1: ML Development
In this phase, Semantiks worked closely with Zenpli to replicate its fraud detection model using GCP services. The main objectives were:
- Develop and train machine learning models with AutoML, BigQuery ML, and Vertex AI (a minimal BigQuery ML sketch follows this list).
- Evaluate each platform's ability to handle large-scale datasets and provide accurate predictions.
- Ensure that the models met technical and business requirements for fraud detection.
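To give a sense of what model development looks like on the BigQuery ML side, here is a minimal sketch of training and evaluating a fraud classifier directly in the data warehouse. The project, dataset, table, and column names (such as `is_fraud` and `amount`) are hypothetical placeholders, not Zenpli's actual schema.

```python
from google.cloud import bigquery

# Hypothetical identifiers used only for illustration.
PROJECT = "zenpli-demo"
DATASET = "fraud"
TRAINING_TABLE = f"{PROJECT}.{DATASET}.transactions_labeled"

client = bigquery.Client(project=PROJECT)

# Train a boosted-tree classifier inside BigQuery; no data leaves the warehouse.
create_model_sql = f"""
CREATE OR REPLACE MODEL `{PROJECT}.{DATASET}.fraud_model`
OPTIONS(
  model_type = 'BOOSTED_TREE_CLASSIFIER',
  input_label_cols = ['is_fraud']
) AS
SELECT amount, merchant_category, country, is_fraud
FROM `{TRAINING_TABLE}`
"""
client.query(create_model_sql).result()  # blocks until training completes

# Inspect standard classification metrics for the trained model.
eval_sql = f"SELECT * FROM ML.EVALUATE(MODEL `{PROJECT}.{DATASET}.fraud_model`)"
for row in client.query(eval_sql).result():
    print(dict(row))
```

AutoML and Vertex AI expose the equivalent step through managed training jobs rather than in-warehouse SQL, which is part of what the evaluation compared.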
Phase 2: MLOps
Building on the ML development phase, Phase 2 focused on deploying and managing the models. Semantiks implemented three independent MLOps pipelines, one for each platform, prioritizing:
- Automated workflows for training, deployment, and performance monitoring (a minimal pipeline sketch follows this list).
- Scalability and seamless integration with Zenpli's existing operations.
- Version control and continuous monitoring to address model drift and maintain performance over time.
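As a rough illustration of the kind of automated workflow described above, the sketch below defines a minimal retraining pipeline with the Kubeflow Pipelines (KFP) v2 SDK, the pipeline format that Vertex AI Pipelines executes. The component name, data location, and feature columns are assumptions made for the example, not Zenpli's actual pipeline.

```python
from kfp import dsl, compiler

# All names, paths, and columns below are illustrative placeholders.

@dsl.component(
    base_image="python:3.10",
    packages_to_install=["pandas", "scikit-learn"],
)
def train_model(training_data_uri: str, model: dsl.Output[dsl.Model]):
    """Train a classifier and write it to the pipeline's artifact store."""
    import pickle
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    df = pd.read_csv(training_data_uri)
    clf = GradientBoostingClassifier()
    clf.fit(df.drop(columns=["is_fraud"]), df["is_fraud"])
    with open(model.path, "wb") as f:
        pickle.dump(clf, f)


@dsl.pipeline(name="fraud-retraining-pipeline")
def fraud_pipeline(training_data_uri: str):
    # Further steps (evaluation, conditional deployment, monitoring setup)
    # would be chained here in a production pipeline.
    train_model(training_data_uri=training_data_uri)


# Compile to a JSON spec that can be scheduled for periodic retraining,
# for example as a Vertex AI PipelineJob.
compiler.Compiler().compile(
    pipeline_func=fraud_pipeline,
    package_path="fraud_pipeline.json",
)
```

Analogous automation for the AutoML and BigQuery ML pipelines relied on each service's own scheduling and retraining features, described below.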
Semantiks delivered three custom MLOps pipelines, each taking advantage of a different GCP service:
- AutoML pipeline: Automated the experimentation, training, and deployment processes, with advanced capabilities such as automatic feature selection, hyperparameter tuning, and periodic retraining to keep the model effective over time.
- BigQuery ML pipeline: Designed to integrate seamlessly with the BigQuery environment, this pipeline optimized data preprocessing, feature engineering, and scheduled model updates directly within the data warehouse.
- Vertex AI pipeline: Prioritized robust model management, incorporating CI/CD capabilities for automated deployment, version control, and model drift monitoring (see the deployment sketch after this list).
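For the Vertex AI pipeline, the deployment and versioning step can be sketched with the `google-cloud-aiplatform` SDK as below. The project, region, artifact bucket, display names, and serving container are hypothetical; in a real CI/CD setup these calls would run from the delivery pipeline after tests pass.

```python
from google.cloud import aiplatform

# Hypothetical project, region, and artifact locations.
aiplatform.init(project="zenpli-demo", location="us-central1")

# Register the newly trained model; passing parent_model would record it as a
# new version of an existing model, simplifying rollbacks and comparisons.
model = aiplatform.Model.upload(
    display_name="fraud-detector",
    artifact_uri="gs://zenpli-demo-models/fraud/latest/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
    ),
)

# Deploy to an autoscaling endpoint; traffic_percentage allows gradual rollout
# alongside a previously deployed version.
endpoint = aiplatform.Endpoint.create(display_name="fraud-endpoint")
endpoint.deploy(
    model=model,
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=5,
    traffic_percentage=100,
)
```

Drift monitoring for the deployed version can then be attached to the endpoint through Vertex AI Model Monitoring, covering the continuous-monitoring requirement noted above.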
Each pipeline was rigorously tested to ensure it met Zenpli's requirements for scalability, performance, and ease of use, enabling a data-driven decision.
At the end of Phase 2, Zenpli had successfully evaluated and tested all three pipelines, obtaining:
- A detailed understanding of the strengths and limitations of AutoML, BigQuery ML, and Vertex AI.
- Comprehensive documentation and training to facilitate an informed decision.
- Increased confidence to scale and manage their fraud detection model effectively.
Zenpli is in an ideal position to select the optimal pipeline for its needs, with plans to implement and scale the chosen solution in the near future. Semantiks will continue to support Zenpli in refining its MLOps processes and ensuring success in deploying its fraud detection model.
The collaboration between Zenpli and Semantiks highlights the value of customized MLOps solutions for transforming machine learning workflows. By allowing Zenpli to evaluate AutoML, BigQuery ML and Vertex AI, Semantiks helped them build a scalable and efficient model management process. This collaboration underscores Semantiks' commitment to delivering innovative solutions that drive the success of visionary companies like Zenpli.