update readme
@@ -12,7 +12,7 @@
**OmniParser** is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.
## News
-- [2024/11/26] We release an updated version, OmniParser V1.5 which features more fine grained/small icon detection. Examples in the demo.ipynb.
+- [2024/11/26] We release an updated version, OmniParser V1.5, which features 1) more fine-grained/small icon detection and 2) prediction of whether each screen element is interactable. Examples in demo.ipynb.
- [2024/10] OmniParser was the #1 trending model on the Hugging Face model hub (starting 10/29/2024).
- [2024/10] Feel free to check out our demo on [Hugging Face Space](https://huggingface.co/spaces/microsoft/OmniParser)! (Stay tuned for OmniParser + Claude Computer Use.)
- [2024/10] Both the Interactive Region Detection model and the Icon Functional Description model are released! [Hugging Face models](https://huggingface.co/microsoft/OmniParser) (a download sketch follows below)
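For readers who want to try the released checkpoints, here is a minimal sketch of fetching them with `huggingface_hub`. The repo id comes from the links above; everything else (local path handling, no pinned revision) is an assumption, not the project's official setup:

```python
from huggingface_hub import snapshot_download

# Fetch the released OmniParser checkpoints from the Hugging Face repo
# linked in the news items above. snapshot_download returns the local
# cache directory containing the downloaded files.
weights_dir = snapshot_download(repo_id="microsoft/OmniParser")
print("OmniParser weights available at:", weights_dir)
```

In real use you would likely pin a `revision` for reproducible weights; see demo.ipynb in the repo for the project's own end-to-end example.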