diff --git a/docs/Makefile b/docs/Makefile
deleted file mode 100644
index d0c3cbf..0000000
--- a/docs/Makefile
+++ /dev/null
@@ -1,20 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS ?=
-SPHINXBUILD ?= sphinx-build
-SOURCEDIR = source
-BUILDDIR = build
-
-# Put it first so that "make" without argument is like "make help".
-help:
- @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
- @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git "a/docs/\\" "b/docs/\\"
deleted file mode 100644
index dbf51cb..0000000
--- "a/docs/\\"
+++ /dev/null
@@ -1,39 +0,0 @@
-Overview
-========
-
-Design Principles
-~~~~~~~~~~~~~~~~~
-
-TODS follows the design principal of `D3M `_.
-The toolkit wraps each function into ``Primitive`` class with unified
-interfaces. The goal of this toolkit is to enable the users to easily develop
-outlier detection system for time series data. The following design principles
-are applied when developing the toolkit:
- * **.** Results on the environments can be reproduced. The same result should be obtained with the same random seed in different runs.
- * **Accessible.** The experiences are collected and well organized after each game with easy-to-use interfaces. Uses can conveniently configure state representation, action encoding, reward design, or even the game rules.
- * **Scalable.** New card environments can be added conveniently into the toolkit with the above design principles. We also try to minimize the dependencies in the toolkit so that the codes can be easily maintained.
-
-TODS High-level Design
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-
-.. image:: img/framework.pdf
- :width: 800
-
-
-
-Data Processing
----------------
-
-
-Timeseries Processing
----------------------
-
-Feature Analysis
-----------------
-
-Detection Algorithms
----------------------
-
-Reincforcement
---------------
diff --git a/docs/source/conf.py b/docs/conf.py
similarity index 100%
rename from docs/source/conf.py
rename to docs/conf.py
diff --git a/docs/source/doctree.rst b/docs/doctree.rst
similarity index 100%
rename from docs/source/doctree.rst
rename to docs/doctree.rst
diff --git a/docs/source/getting_started.rst b/docs/getting_started.rst
similarity index 100%
rename from docs/source/getting_started.rst
rename to docs/getting_started.rst
diff --git a/docs/source/img/tods_framework.pdf b/docs/img/tods_framework.pdf
similarity index 100%
rename from docs/source/img/tods_framework.pdf
rename to docs/img/tods_framework.pdf
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 0000000..8a0d452
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,100 @@
+.. Time Series Outlier Detection System documentation master file, created by
+ sphinx-quickstart on Wed Sep 9 22:52:15 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+Welcome to TODS's documentation!
+================================================================
+TODS is a full-stack automated machine learning system for outlier detection on multivariate time-series data. TODS provides exhaustive modules for building machine learning-based outlier detection systems, including: data processing, time series processing, feature analysis (extraction), detection algorithms, and a reinforcement module. The functionalities provided by these modules include: data preprocessing for general purposes, time series data smoothing/transformation, extracting features from time/frequency domains, various detection algorithms, and involving human expertise to calibrate the system. Three common outlier detection scenarios on time-series data can be performed: point-wise detection (time points as outliers), pattern-wise detection (subsequences as outliers), and system-wise detection (sets of time series as outliers), and a wide range of corresponding algorithms is provided in TODS. This package is developed by `DATA Lab @ Texas A&M University `__.
+
+TODS is featured for:
+
+* **Full-Stack Machine Learning System** which supports exhaustive components from preprocessing, feature extraction, and detection algorithms to a human-in-the-loop interface.
+
+* **Wide Range of Algorithms**, including all of the point-wise detection algorithms supported by `PyOD `__, state-of-the-art pattern-wise (collective) detection algorithms such as `DeepLog `__ and `Telemanom `__, and various ensemble algorithms for performing system-wise detection.
+
+* **Automated Machine Learning**, which aims to provide a knowledge-free process that constructs an optimal pipeline for the given data by automatically searching for the best combination among all of the existing modules.
+
+Installation
+------------
+This package works with **Python 3.6** and pip 19+. You need to have the following packages installed on the system (for Debian/Ubuntu):
+::
+ sudo apt-get install libssl-dev libcurl4-openssl-dev libyaml-dev build-essential libopenblas-dev libcap-dev ffmpeg
+
+Then execute ``python setup.py install``; the script will install all of the packages required to build TODS.
+
+
+
+.. toctree::
+ :maxdepth: 4
+ :caption: Contents:
+
+Examples
+--------
+Examples are available in `examples `__. For basic usage, you can evaluate a pipeline on a given dataset. Here, we provide an example that loads our default pipeline and evaluates it on a subset of the Yahoo dataset.
+
+.. code:: python
+
+ import pandas as pd
+
+ from tods import schemas as schemas_utils
+ from tods.utils import generate_dataset_problem, evaluate_pipeline
+
+    table_path = 'datasets/yahoo_sub_5.csv'
+    #table_path = 'datasets/NAB/realTweets/labeled_Twitter_volume_IBM.csv' # The path of the dataset
+    target_index = 6 # which column is the target
+    #metric = 'F1' # F1 on label 1
+    metric = 'F1_MACRO' # F1 on both labels 0 and 1
+
+ # Read data and generate dataset and problem
+ df = pd.read_csv(table_path)
+ dataset, problem_description = generate_dataset_problem(df, target_index=target_index, metric=metric)
+
+ # Load the default pipeline
+ pipeline = schemas_utils.load_default_pipeline()
+
+ # Run the pipeline
+ pipeline_result = evaluate_pipeline(problem_description, dataset, pipeline)
+
+
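The pipeline above consumes a plain table with the anomaly label in a single column (``target_index = 6``). As a quick illustration of that input shape, here is a small synthetic frame; the column names are hypothetical, not the actual ``yahoo_sub_5.csv`` schema:

```python
import pandas as pd

# A hypothetical table in the shape the example above expects: one row per
# time point, feature columns first, and a binary anomaly label in the
# column at target_index = 6. Names are illustrative only.
df = pd.DataFrame({
    'timestamp': range(5),
    'value_0': [0.5, 0.6, 0.4, 9.9, 0.5],  # spike at index 3
    'value_1': [1.0, 1.1, 0.9, 1.0, 1.1],
    'value_2': [0.2, 0.3, 0.2, 0.2, 0.3],
    'value_3': [0.7, 0.8, 0.6, 0.7, 0.8],
    'value_4': [0.1, 0.2, 0.1, 0.1, 0.2],
    'anomaly': [0, 0, 0, 1, 0],  # label column
})

print(df.shape)       # (5, 7)
print(df.columns[6])  # anomaly
```

Any table with this layout can be passed to ``generate_dataset_problem`` by pointing ``target_index`` at the label column.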
+We also provide AutoML support to help you automatically find a good pipeline for your data.
+
+
+.. code:: python
+
+ import pandas as pd
+
+ from axolotl.backend.simple import SimpleRunner
+
+ from tods.utils import generate_dataset_problem
+ from tods.search import BruteForceSearch
+
+ # Some information
+ #table_path = 'datasets/NAB/realTweets/labeled_Twitter_volume_GOOG.csv' # The path of the dataset
+ #target_index = 2 # what column is the target
+
+    table_path = 'datasets/yahoo_sub_5.csv'
+    target_index = 6 # which column is the target
+    #table_path = 'datasets/NAB/realTweets/labeled_Twitter_volume_IBM.csv' # The path of the dataset
+    time_limit = 30 # How many seconds to search
+    #metric = 'F1' # F1 on label 1
+    metric = 'F1_MACRO' # F1 on both labels 0 and 1
+
+ # Read data and generate dataset and problem
+ df = pd.read_csv(table_path)
+ dataset, problem_description = generate_dataset_problem(df, target_index=target_index, metric=metric)
+
+ # Start backend
+ backend = SimpleRunner(random_seed=0)
+
+ # Start search algorithm
+ search = BruteForceSearch(problem_description=problem_description, backend=backend)
+
+ # Find the best pipeline
+ best_runtime, best_pipeline_result = search.search_fit(input_data=[dataset], time_limit=time_limit)
+ best_pipeline = best_runtime.pipeline
+ best_output = best_pipeline_result.output
+
+ # Evaluate the best pipeline
+ best_scores = search.evaluate(best_pipeline).scores
diff --git a/docs/make.bat b/docs/make.bat
deleted file mode 100644
index 6247f7e..0000000
--- a/docs/make.bat
+++ /dev/null
@@ -1,35 +0,0 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
- set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=source
-set BUILDDIR=build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
- echo.
- echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
- echo.installed, then set the SPHINXBUILD environment variable to point
- echo.to the full path of the 'sphinx-build' executable. Alternatively you
- echo.may add the Sphinx directory to PATH.
- echo.
- echo.If you don't have Sphinx installed, grab it from
- echo.http://sphinx-doc.org/
- exit /b 1
-)
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
diff --git a/docs/source/modules.rst b/docs/modules.rst
similarity index 100%
rename from docs/source/modules.rst
rename to docs/modules.rst
diff --git a/docs/source/overview.rst b/docs/overview.rst
similarity index 100%
rename from docs/source/overview.rst
rename to docs/overview.rst
diff --git a/docs/source/index.rst b/docs/source/index.rst
deleted file mode 100644
index d41d25a..0000000
--- a/docs/source/index.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-.. Time Series Outlier Detection System documentation master file, created by
- sphinx-quickstart on Wed Sep 9 22:52:15 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-Welcome to TOD's documentation!
-================================================================
-
-.. toctree::
- :maxdepth: 4
- :caption: Contents:
-
-
-
-API Documents
-==================
-.. toctree::
- :maxdepth: 4
- :caption: API Documents:
- tods.data_processing
- tods.timeseries_processing
- tods.feature_analysis
- tods.detection_algorithm
- tods.reinforcement
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
diff --git a/docs/source/tods.data_processing.rst b/docs/tods.data_processing.rst
similarity index 100%
rename from docs/source/tods.data_processing.rst
rename to docs/tods.data_processing.rst
diff --git a/docs/source/tods.detection_algorithm.rst b/docs/tods.detection_algorithm.rst
similarity index 100%
rename from docs/source/tods.detection_algorithm.rst
rename to docs/tods.detection_algorithm.rst
diff --git a/docs/source/tods.feature_analysis.rst b/docs/tods.feature_analysis.rst
similarity index 100%
rename from docs/source/tods.feature_analysis.rst
rename to docs/tods.feature_analysis.rst
diff --git a/docs/source/tods.reinforcement.rst b/docs/tods.reinforcement.rst
similarity index 100%
rename from docs/source/tods.reinforcement.rst
rename to docs/tods.reinforcement.rst
diff --git a/docs/source/tods.rst b/docs/tods.rst
similarity index 100%
rename from docs/source/tods.rst
rename to docs/tods.rst
diff --git a/docs/source/tods.searcher.rst b/docs/tods.searcher.rst
similarity index 100%
rename from docs/source/tods.searcher.rst
rename to docs/tods.searcher.rst
diff --git a/docs/source/tods.searcher.search.rst b/docs/tods.searcher.search.rst
similarity index 100%
rename from docs/source/tods.searcher.search.rst
rename to docs/tods.searcher.search.rst
diff --git a/docs/source/tods.timeseries_processing.rst b/docs/tods.timeseries_processing.rst
similarity index 100%
rename from docs/source/tods.timeseries_processing.rst
rename to docs/tods.timeseries_processing.rst