MLOps Engineer

Lisa Wang

Building automated ML pipelines that deploy models to production with confidence.

Education

  • M.S. in Data Science from Carnegie Mellon University
  • B.S. in Computer Science from Georgia Tech

Experience

  • 4 years as MLOps Engineer at Databricks
  • 3 years as ML Infrastructure Engineer at Airbnb

I’m Lisa Wang, MLOps Engineer at Kurai. I specialize in building the infrastructure and pipelines that take ML models from experiment to production. My experience at Databricks and Airbnb has taught me that great ML models are useless without great MLOps.

From Notebook to Production

At Airbnb, I built the ML platform that deployed 100+ models to production, serving real-time predictions for pricing and recommendations. At Databricks, I worked on MLflow and enterprise MLOps solutions for Fortune 500 companies.

I’ve seen too many ML projects fail because they couldn’t bridge the gap between Jupyter notebooks and production systems. I fix that.

My Expertise

ML Infrastructure:

  • MLflow experiment tracking and model registry (see the sketch after this list)
  • Kubeflow pipelines for orchestrating ML workflows
  • Kubernetes-based model serving (Seldon, KServe)
  • Feature stores (Feast, Tecton)
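
As a rough illustration of the experiment-tracking and registry workflow above, here is a minimal MLflow sketch. The experiment and registered model name ("pricing-model"), the toy dataset, and the hyperparameters are placeholders, and registration assumes a tracking backend that supports the model registry:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data stands in for the real training set.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("pricing-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model makes it visible to downstream deployment jobs.
    mlflow.sklearn.log_model(model, "model", registered_model_name="pricing-model")
```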

CI/CD for ML:

  • Automated training pipelines (GitHub Actions, Airflow)
  • Model testing and validation before deployment (example gate below)
  • Canary deployments and A/B testing
  • Automated rollback on model degradation
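
A minimal sketch of the kind of validation gate a CI job can run before promoting a candidate model. The metric file format, paths, and thresholds are assumptions for illustration; the non-zero exit code is what lets CI block the deploy step:

```python
import json
import sys

MIN_ACCURACY = 0.90     # hypothetical absolute floor
MAX_REGRESSION = 0.01   # candidate may not trail production by more than this

def main(candidate_path: str, production_path: str) -> int:
    """Return 0 (pass) or 1 (fail) so CI can block the deploy step."""
    with open(candidate_path) as f:
        candidate = json.load(f)
    with open(production_path) as f:
        production = json.load(f)

    if candidate["accuracy"] < MIN_ACCURACY:
        print(f"FAIL: accuracy {candidate['accuracy']:.3f} is below the floor {MIN_ACCURACY}")
        return 1
    if candidate["accuracy"] < production["accuracy"] - MAX_REGRESSION:
        print("FAIL: candidate regresses against the production model")
        return 1

    print("PASS: candidate cleared for canary rollout")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```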

Monitoring & Observability:

  • Model performance monitoring (accuracy, precision, recall)
  • Data drift detection (Evidently, Arize; a bare-bones check is sketched below)
  • Prediction latency and throughput tracking
  • Alerting on anomalies and failures
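
Dedicated tools like Evidently and Arize do this far more thoroughly, but a bare-bones drift check can be as simple as a per-feature Kolmogorov-Smirnov test between a reference window and recent traffic. The synthetic data below is only for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_columns(reference: np.ndarray, current: np.ndarray,
                    p_threshold: float = 0.01) -> list[int]:
    """Return the indices of features whose distribution shifted between windows."""
    drifted = []
    for col in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, col], current[:, col])
        if p_value < p_threshold:
            drifted.append(col)
    return drifted

# Synthetic example: feature 0 has shifted, features 1 and 2 have not.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5_000, 3))
current = reference.copy()
current[:, 0] += 0.5
print(drifted_columns(reference, current))  # expected: [0]
```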

Model Deployment:

  • Batch inference (AWS Lambda, Glue)
  • Real-time API serving (FastAPI, TensorFlow Serving; example endpoint below)
  • Edge deployment (ONNX, TensorRT)
  • Multi-cloud strategies (AWS SageMaker, GCP Vertex AI)
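
For the real-time serving path, a minimal FastAPI endpoint looks roughly like this. The artifact path, request schema, and module name are placeholders rather than a real production service:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact from the training pipeline

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    # Most sklearn-style models expect a 2D array: one row per prediction.
    score = float(model.predict([request.features])[0])
    return {"prediction": score}

# Run locally with: uvicorn serve:app --port 8000
```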

Production ML Systems

Recent MLOps projects I’ve delivered:

  • RAG pipeline: Automated deployment pipeline for LLM-powered document search
  • Recommendation engine: Real-time model serving with 50ms p95 latency
  • Fraud detection: Auto-retraining pipeline adapting to new fraud patterns
  • Forecasting system: Deploying 500+ models weekly for different product categories

Automation First

I believe in automation over manual processes. Every ML pipeline I build (see the DAG sketch after this list):

  • Trains automatically when new data arrives
  • Tests thoroughly before deploying to production
  • Monitors continuously for performance degradation
  • Retrains intelligently when models drift
  • Rolls back safely if issues are detected
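
One way to wire the train, validate, and deploy steps together is an Airflow DAG. This is only a skeletal sketch assuming Airflow 2.x, with the DAG id, schedule, and task bodies as placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model():
    """Placeholder: pull the latest features and fit a candidate model."""

def validate_model():
    """Placeholder: run the validation gate; raise to stop the pipeline."""

def deploy_model():
    """Placeholder: promote the candidate and start a canary rollout."""

with DAG(
    dag_id="model_retraining",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",       # or trigger from a new-data sensor
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)

    train >> validate >> deploy
```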

Best Practices

MLOps isn’t just about tools—it’s about processes:

  • Version everything: Code, data, models, and hyperparameters (see the manifest sketch below)
  • Test everywhere: Unit tests, integration tests, model validation
  • Document continuously: Model cards, runbooks, architecture docs
  • Measure obsessively: Metrics, logs, and traces for everything
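
As one small example of "version everything", a training job can emit a manifest that pins the git commit, a hash of the exact data snapshot, and the hyperparameters. The paths and values here are purely illustrative:

```python
import hashlib
import json
import subprocess

def dataset_sha256(path: str) -> str:
    """Fingerprint the exact data snapshot used for training."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

manifest = {
    "git_sha": subprocess.check_output(["git", "rev-parse", "HEAD"]).decode().strip(),
    "data_sha256": dataset_sha256("data/train.parquet"),  # hypothetical data path
    "hyperparameters": {"learning_rate": 0.05, "max_depth": 6},  # illustrative values
}

with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```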

Let’s Operationalize

Whether you’re starting your MLOps journey or scaling existing ML infrastructure, I can help. Reach out at lisa@kurai.dev.

Certifications:

  • AWS Certified Machine Learning Specialty
  • Google Professional ML Engineer
  • Certified Kubernetes Administrator (CKA)
