version 1.5

This commit is contained in:
yadonglu
2024-11-26 13:04:51 -08:00
parent 7021ad8917
commit 856b539e54
3 changed files with 2 additions and 5 deletions

@@ -12,7 +12,7 @@
**OmniParser** is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.
## News
- [2024/11] OmniParser V1.5 is out! It features: 1) a new icon detection model trained on cleaned data, and 2) improved bbox merging logic.
- [2024/11/26] We release an updated version, OmniParser V1.5, which features more fine-grained/small icon detection. See examples in demo.ipynb.
- [2024/10] OmniParser was the #1 trending model on huggingface model hub (starting 10/29/2024).
- [2024/10] Feel free to check out our demo on [huggingface space](https://huggingface.co/spaces/microsoft/OmniParser)! (stay tuned for OmniParser + Claude Computer Use)
- [2024/10] Both the Interactive Region Detection Model and the Icon functional description model are released! [Hugging Face models](https://huggingface.co/microsoft/OmniParser)

@@ -20,9 +20,6 @@
"from PIL import Image\n",
"device = 'cuda'\n",
"\n",
"# som_model = get_yolo_model(model_path='weights/icon_detect/best.pt')\n",
"# som_model = get_yolo_model(model_path='/home/yadonglu/sandbox/data/yolo/runs/detect/yolo11l_som_detection_seq_10ep_b32_filter5more4/weights/best.pt')\n",
"# som_model = get_yolo_model('/home/yadonglu/sandbox/data/yolo/runs/detect/yolo11l_som_detection_seq_10ep_b24_filter5more1280/weights/best.pt')\n",
"som_model = get_yolo_model(model_path='weights/icon_detect_v1_5/best.pt')\n",
"\n",
"som_model.to(device)\n",

@@ -16,7 +16,7 @@ if args.version == 'v1':
torch.save({'model':model}, 'weights/icon_detect/best.pt')
elif args.version == 'v1_5':
print("Converting v1_5")
tensor_dict = load_file("weights/icon_detect_v1_5/model.safetensors")
tensor_dict = torch.load("weights/icon_detect_v1_5/model.safetensors")
model = DetectionModel('weights/icon_detect_v1_5/model.yaml')
model.load_state_dict(tensor_dict)
save_dict = {'model':model}