HAWPv2: Learning Wireframes via Fully-Supervised Learning

June 29, 2023 · View on GitHub

The code for HAWPv2 is located in the hawp/fsl directory.

Quickstart & Evaluation

  • Please download the dataset and checkpoints as described in readme.md.

  • Run the following command lines to evaluate the official model on the Wireframe and YorkUrban datasets:

    # Evaluation on the Wireframe dataset.
    python -m hawp.fsl.benchmark configs/hawpv2.yaml \
      --ckpt checkpoints/hawpv2-edb9b23f.pth \
      --dataset wireframe

    # Evaluation on the YorkUrban dataset.
    python -m hawp.fsl.benchmark configs/hawpv2.yaml \
      --ckpt checkpoints/hawpv2-edb9b23f.pth \
      --dataset york
    

Evaluation Results

| Datasets  | sAP-5 | sAP-10 | sAP-15 | command line | comment |
|-----------|-------|--------|--------|--------------|---------|
| Wireframe | 65.8  | 69.8   | 71.4   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset wireframe --jhm=0.001` | jhm = 0.001 |
| Wireframe | 65.7  | 69.8   | 71.4   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset wireframe --jhm=0.005` | jhm = 0.005 |
| Wireframe | 65.7  | 69.7   | 71.3   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset wireframe --jhm=0.008` | jhm = 0.008 (default setting) |
| YorkUrban | 29.0  | 31.4   | 32.8   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset york --jhm=0.001` | jhm = 0.001 |
| YorkUrban | 28.9  | 31.4   | 32.7   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset york --jhm=0.005` | jhm = 0.005 |
| YorkUrban | 28.8  | 31.3   | 32.6   | `python -m hawp.fsl.benchmark configs/hawpv2.yaml --ckpt checkpoints/hawpv2-edb9b23f.pth --dataset york --jhm=0.008` | jhm = 0.008 |
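The per-threshold rows above can be reproduced with a short sweep over the `--jhm` values. The sketch below only prints each command (drop the `echo` to actually run them), and assumes the checkpoint and datasets are in place as described in readme.md:

```shell
# Sweep the --jhm threshold over the three values reported in the table.
for jhm in 0.001 0.005 0.008; do
  echo python -m hawp.fsl.benchmark configs/hawpv2.yaml \
    --ckpt checkpoints/hawpv2-edb9b23f.pth \
    --dataset wireframe --jhm=$jhm
done
```

Replace `--dataset wireframe` with `--dataset york` to sweep the YorkUrban rows.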

Training

  • Run the following command line to train HAWPv2 on the Wireframe dataset.

    python -m hawp.fsl.train configs/hawpv2.yaml --logdir outputs
    
  • The usage of hawp.fsl.train is as follows:

    HAWPv2 Training
    
    positional arguments:
      config              path to config file
    
    optional arguments:
      -h, --help          show this help message and exit
      --logdir LOGDIR
      --resume RESUME
      --clean
      --seed SEED
      --tf32              toggle on the TF32 of pytorch
      --dtm {True,False}  toggle the deterministic option of CUDNN. This option will affect the replication of experiments
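To make the `--tf32` and `--dtm` flags concrete, the helper below sketches the PyTorch backend switches such flags typically control. The function name and the mapping are illustrative assumptions based on standard PyTorch APIs, not the actual hawp.fsl.train implementation:

```python
# Hypothetical helper: map the --tf32/--dtm CLI flags to the PyTorch backend
# switches they typically control. Names of the torch settings follow the
# standard torch.backends API; the real training script may differ.
def backend_settings(tf32: bool, dtm: bool) -> dict:
    return {
        # TF32 speeds up float32 matmuls/convolutions on Ampere+ GPUs
        # at a small precision cost.
        "torch.backends.cuda.matmul.allow_tf32": tf32,
        "torch.backends.cudnn.allow_tf32": tf32,
        # Deterministic CuDNN trades speed for exact run-to-run
        # reproducibility, which is why --dtm "affects replication".
        "torch.backends.cudnn.deterministic": dtm,
        "torch.backends.cudnn.benchmark": not dtm,
    }

# Example: a fully reproducible (deterministic, no TF32) configuration.
settings = backend_settings(tf32=False, dtm=True)
```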