# This Azure Pipeline validates and deploys bundle config (ML resource config and more)
# defined under my_mlops_stacks/resources/*
# and my_mlops_stacks/databricks.yml.
# The bundle is validated (CI) when a PR is opened against the main_mlops_stack branch.
# Bundle resources defined for staging are deployed when a PR is merged into the main_mlops_stack branch.
# Bundle resources defined for prod are deployed when a PR is merged into the release_mlops_stack branch.
trigger:
  branches:
    include:
      - main_mlops_stack
      - release_mlops_stack
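In practice the validate and deploy stages of this pipeline reduce to Databricks CLI bundle commands. The snippet below is a minimal local sketch of those calls, not the pipeline's actual steps; it assumes the Databricks CLI is installed and authenticated, and that the bundle defines staging and prod targets as the comments above describe.

import subprocess

def run_bundle_command(*args: str) -> None:
    # Run a `databricks bundle ...` subcommand from the bundle root and fail fast on errors.
    subprocess.run(["databricks", "bundle", *args], cwd="my_mlops_stacks", check=True)

# CI: validate the bundle config (what the PR check does).
run_bundle_command("validate")

# CD: deploy to staging (merge to main_mlops_stack) or prod (merge to release_mlops_stack).
run_bundle_command("deploy", "-t", "staging")
# run_bundle_command("deploy", "-t", "prod")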
my_mlops_stacks        <- Root directory. Both monorepo and polyrepo layouts are supported.
│
├── my_mlops_stacks    <- Contains Python code, notebooks, and ML resources related to one ML project.
│   │
│   ├── requirements.txt <- Specifies Python dependencies for the ML code (for example: model training, batch inference); see the setup-cell sketch after this tree.
│   │
│   ├── databricks.yml   <- The root bundle file for the ML project, loadable by Databricks CLI bundles. It defines the bundle name, workspace URL, and the resource config components to be included.
│   │
│   ├── training         <- Folder containing the notebook that trains and registers the model with Feature Store support.
│   │
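The dependencies listed in requirements.txt are usually installed at the top of each notebook before any project code runs. The following is a hypothetical setup cell, assuming the notebook sits one level below the project root (for example under training/):

%pip install -r ../requirements.txt

# COMMAND ----------

# Restart the Python process so the newly installed packages are picked up.
dbutils.library.restartPython()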
# Databricks notebook source
# MAGIC %load_ext autoreload
# MAGIC %autoreload 2

# COMMAND ----------
import os

# Resolve this notebook's folder in the workspace and change into it so that relative
# imports and paths within the project resolve correctly.
notebook_path = '/Workspace/' + os.path.dirname(dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get())
%cd $notebook_path
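Once the working directory points at the notebook's folder, project modules can be imported with plain relative imports. A small illustrative follow-up cell (the `features` package name is hypothetical):

import sys

# Also put the folder on sys.path so imports keep working even if a later cell changes directories.
sys.path.append(notebook_path)
# from features import feature_utils  # hypothetical project import
print("Working directory:", os.getcwd())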
# Databricks notebook source
import os
import sys
import requests
import json

notebook_path = '/Workspace/' + os.path.dirname(dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get())

# COMMAND ----------
# Set the name of the MLflow endpoint
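Once a model serving endpoint with that name exists, it can be queried over REST using the requests and json imports above. The cell below is a sketch; the endpoint name and the feature payload are placeholders, and the workspace URL and token are read from the notebook context.

endpoint_name = "my_mlops_stacks-endpoint"  # placeholder endpoint name

ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
host = ctx.apiUrl().get()
token = ctx.apiToken().get()

response = requests.post(
    f"{host}/serving-endpoints/{endpoint_name}/invocations",
    headers={"Authorization": f"Bearer {token}"},
    json={"dataframe_records": [{"feature_a": 1.0, "feature_b": 2.0}]},  # placeholder features
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))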
# Log the trained model with MLflow and package it with feature lookup information.
# fs.log_model(
#     model,
#     artifact_path="model_packaged",
#     flavor=mlflow.lightgbm,
#     training_set=training_set,
#     registered_model_name=model_name,
# )

# Log the trained model with MLflow.
autolog_run = mlflow.last_active_run()
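With autologging, the model still needs to be registered explicitly when fs.log_model is not used. A minimal sketch, assuming the autologged artifact path is "model" and that model_name is defined earlier in the notebook:

import mlflow

# Point at the artifacts of the autologged run and register them as a new model version.
model_uri = f"runs:/{autolog_run.info.run_id}/model"
model_version = mlflow.register_model(model_uri, model_name)
print(f"Registered {model_name} as version {model_version.version}")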
# The name of the bundle. Run `databricks bundle schema` to see the full bundle settings schema.
bundle:
  name: my_mlops_stacks

variables:
  experiment_name:
    description: Experiment name for the model training.
    default: /Users/${workspace.current_user.userName}/${bundle.target}-my_mlops_stacks-experiment
  model_name:
    description: Model name for the model training.
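When the training job runs, these variables are typically passed into the notebook as parameters. A sketch of how a notebook might consume them; the widget names mirror the variable names above, and the default values are placeholders:

import mlflow

dbutils.widgets.text("experiment_name", "/Shared/my_mlops_stacks-experiment")
dbutils.widgets.text("model_name", "my_mlops_stacks-model")

# Track training runs under the experiment configured in databricks.yml.
mlflow.set_experiment(dbutils.widgets.get("experiment_name"))
model_name = dbutils.widgets.get("model_name")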