
Venterra Realty

NeuralOps for Automated Training and Inference Pipelines

Real Estate | MLOps
Case Study

Business Impacts

70%

Improvement in time to release (~2 weeks)

Customer Key Facts

  • Country: USA
  • Industry: Real Estate

Problem Context

Venterra Realty is a premier real estate investment firm specializing in developing, financing, and managing multi-family residential communities in the southern United States.

They wanted to build an end-to-end machine learning operations management platform to design, develop, and maintain ML models. To achieve this goal, they needed automated training and inference pipelines that could be configured, monitored, and governed with minimal manual intervention. Users would bring their own code for large-scale model training and inference generation, and continuously monitor models in production.

Challenges

  • Venterra Realty faced repetitive workloads and redundant effort because it lacked robust, standardized AI/ML processes
  • Data science teams were over-dependent on engineering and DevOps teams
  • The team had limited knowledge of best practices for building ML solutions on SageMaker

Technology Used

Amazon SageMaker

Amazon S3

AWS Lambda

Amazon EKS

Amazon DynamoDB

Amazon RDS

Apache Airflow

NeuralOps

Deployed Quantiphi's MLOps platform, NeuralOps, in the client's AWS environment

Solution

Venterra Realty's team deployed Quantiphi's MLOps platform, NeuralOps, in the client's AWS environment. The platform combines DevOps practices with Trusted AI principles (Reliable, Ethical, and Lawful), and automated the training and inference pipelines in a fully configured environment requiring minimal manual intervention. Quantiphi's solution allowed users to onboard new algorithms, processing steps, orchestration pipelines, and more onto the platform.

The platform enabled continuous monitoring of production data and model performance measures, as well as automatic and continuous retraining of machine learning models to respond to changes in data distribution.
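Drift-triggered retraining of this kind can be sketched with a population stability index (PSI) check over a monitored feature: when the live distribution departs far enough from the training distribution, a retraining run is kicked off. The PSI formula is standard, but the threshold, bin count, and sample data below are illustrative assumptions, not NeuralOps' actual configuration.

```python
# Minimal sketch of drift-triggered retraining using a population
# stability index (PSI) check. Threshold and bin count are assumed
# values for illustration, not the platform's real settings.
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample
    (expected) and a live production sample (actual) of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature
    score = 0.0
    for i in range(bins):
        left, right = lo + i * width, lo + (i + 1) * width
        # Count values per bin; the final bin includes its right edge.
        e = sum(left <= x < right or (i == bins - 1 and x >= right) for x in expected)
        a = sum(left <= x < right or (i == bins - 1 and x >= right) for x in actual)
        # Smooth empty bins to avoid log(0).
        e_pct = max(e / len(expected), 1e-6)
        a_pct = max(a / len(actual), 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

def should_retrain(train_sample, live_sample, threshold=0.2):
    """Flag a retraining run when drift exceeds the (assumed) threshold.
    PSI above ~0.2 is a commonly used rule of thumb for significant shift."""
    return psi(train_sample, live_sample) > threshold
```

In a production pipeline, a check like this would run on a schedule (e.g. as an Airflow task over fresh inference logs) and, when it fires, trigger the training pipeline rather than requiring a human to notice the drift.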

Best practices such as build-test-release cycles, segregation of test and production environments, and modularity were baked into the lifecycle of developing ML solutions.

Result

  • Set up a governance mechanism that ensures machine learning best practices and reusability
  • Improved infrastructure utilization through SageMaker jobs
  • Reduced manual effort for infrastructure setup and configuration
