[arXiv](https://arxiv.org/abs/2408.00203)
[License: MIT](https://opensource.org/licenses/MIT)
📢 [[Project Page](https://microsoft.github.io/OmniParser/)] [[Blog Post](https://www.microsoft.com/en-us/research/articles/omniparser-for-pure-vision-based-gui-agent/)] [[Models](https://huggingface.co/microsoft/OmniParser)] [[Hugging Face Space](https://huggingface.co/spaces/microsoft/OmniParser)]
**OmniParser** is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.
## News
- [2024/11] OmniParser V1.5 is out! It features: 1) a new icon detection model trained on cleaned data, and 2) improved bounding-box (bbox) merging logic.
- [2024/10] OmniParser was the #1 trending model on the Hugging Face model hub (starting 10/29/2024).
- [2024/10] Feel free to check out our demo on [Hugging Face Space](https://huggingface.co/spaces/microsoft/OmniParser)! (Stay tuned for OmniParser + Claude Computer Use.)
- [2024/10] Both the interactive region detection model and the icon functional description model are released! [Hugging Face models](https://huggingface.co/microsoft/OmniParser)
- [2024/09] OmniParser achieves the best performance on [Windows Agent Arena](https://microsoft.github.io/WindowsAgentArena/)!
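The bbox merging mentioned in the V1.5 release notes can be sketched in a few lines. The snippet below is a hypothetical illustration only (a greedy IoU-based union merge), not OmniParser's actual implementation; the function names `iou` and `merge_boxes` and the threshold value are assumptions for the example.

```python
# Hypothetical sketch of bbox merging: overlapping detections whose
# intersection-over-union (IoU) exceeds a threshold are fused into the
# union of their coordinates. OmniParser's real logic may differ.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_boxes(boxes, iou_threshold=0.5):
    """Greedily merge boxes that overlap above the threshold."""
    merged = []
    for box in boxes:
        for i, kept in enumerate(merged):
            if iou(box, kept) > iou_threshold:
                # Replace the kept box with the union of the two.
                merged[i] = (min(box[0], kept[0]), min(box[1], kept[1]),
                             max(box[2], kept[2]), max(box[3], kept[3]))
                break
        else:
            merged.append(box)
    return merged
```

For example, `merge_boxes([(0, 0, 10, 10), (1, 1, 9, 9), (20, 20, 30, 30)])` collapses the two overlapping boxes into one while leaving the distant box untouched.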