diff --git a/example/continuous_text_classification_dev/README.md b/example/continuous_text_classification_dev/README.md
index 95f22a2..e3c1c03 100644
--- a/example/continuous_text_classification_dev/README.md
+++ b/example/continuous_text_classification_dev/README.md
@@ -187,7 +187,7 @@ sentiment_analysis_pipeline.run()
 ```
 
 Alternatively, we can let DataCI automatically trigger the pipeline run upon a new dataset is published,
-please refer to the [DataCI Trigger Tutorial]() (WIP).
+please refer to the [DataCI Trigger Tutorial](/example/ci) (WIP).
 
 Go to [pipeline runs dashboard](http://localhost:8080/taskinstance/list/?_flt_3_dag_id=default--sentiment_analysis--v1)
 to check the pipeline run result.
diff --git a/example/data_centric_benchmark/README.md b/example/data_centric_benchmark/README.md
index e490d98..a608257 100644
--- a/example/data_centric_benchmark/README.md
+++ b/example/data_centric_benchmark/README.md
@@ -3,7 +3,7 @@ pipelines. In this tutorial, we will show how to use DataCI to benchmark the dat
 Data is the most important part of the machine learning pipeline. Data scientists spend most of their time cleaning,
 augmenting, and preprocessing data, only to find the best online performance with the same model structure.
 
-[In the previous tutorial](/example/create_text_classification_dataset), we built 4 versions of the text classification
+[In the previous tutorial](/example/continuous_text_classification_dev), we built 4 versions of the text classification
 dataset `train_data_pipeline:text_aug`. We are now going to determine which dataset performs the best.
 
 # 0. Prerequisites
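
The pipeline runs dashboard linked in the first README is an Airflow web UI. As a convenience, the sketch below checks the same run results from a script instead of the browser. It is a minimal sketch, assuming the server at `http://localhost:8080` is a standard Airflow 2.x webserver with the stable REST API enabled and basic-auth credentials, and that the DAG id equals the `dag_id` filter in the dashboard URL (`default--sentiment_analysis--v1`); it is not part of the DataCI API.

```python
# Minimal sketch (assumption): the dashboard at localhost:8080 is a standard
# Airflow 2.x webserver with the stable REST API enabled. The basic-auth
# credentials below are placeholders; adjust them to your deployment.
import requests

AIRFLOW_API = "http://localhost:8080/api/v1"
DAG_ID = "default--sentiment_analysis--v1"  # dag_id filter from the dashboard URL
AUTH = ("admin", "admin")                   # placeholder credentials

# List recent runs of the sentiment_analysis pipeline.
resp = requests.get(
    f"{AIRFLOW_API}/dags/{DAG_ID}/dagRuns",
    params={"limit": 5, "order_by": "-execution_date"},
    auth=AUTH,
)
resp.raise_for_status()

for run in resp.json()["dag_runs"]:
    # Each run reports its id, state (queued / running / success / failed),
    # and execution date.
    print(run["dag_run_id"], run["state"], run["execution_date"])
```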