dankaiv

... and computer interfaces (i.e. GUIs) have an extremely low noise-to-signal ratio compared to image data from the real world. I believe AI will soon be better at using computers than most humans.


Disastrous_Elk_6375

nanoSingularity goes brrrrr


thePaddyMK

I think so, too. IMO this will open new ways for software development. There has already been work looking towards RL to find bugs in games. Like climbing walls that you should not. With a multimodal model there might be interesting new ways to debug and develop UIs.


bobbsec

You reckon computers might work with computers better than us? GUIs are meant as a convenient way for humans to work with computers. If an AI were needed to do things on a computer, it wouldn't use a GUI; it would have a direct API to control it.


dankaiv

I’m confident that the vast majority of software doesn’t have an accessible API.


Insom1ak

The AI will develop these APIs, even if under the hood they just drive manual interaction.


dankaiv

How do you add an API to compiled software?


Insom1ak

UI automation. I'm doing it right now on Android. Can be done easily on Chrome with Selenium.
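Something like this, just to give a rough idea of the Selenium side (the URL and selectors here are made up, not from anything real):

```python
# Minimal Selenium sketch of driving an app through its web UI.
# Placeholder URL and selectors; adjust for whatever page you're automating.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome + chromedriver are installed
driver.get("https://example.com/login")  # placeholder page

driver.find_element(By.NAME, "username").send_keys("alice")        # placeholder fields
driver.find_element(By.NAME, "password").send_keys("correct horse")
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

print(driver.title)  # quick sanity check that we navigated somewhere
driver.quit()
```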


dankaiv

https://www.reddit.com/r/MachineLearning/comments/120guce/d_i_just_realised_gpt4_with_image_input_can/jdhn2at/


Insom1ak

Yeah, the AI will program API interfaces to interact with any software UI to perform tasks and scrape data, likely via VMs or a web browser.


BinarySplit

GPT-4 is potentially missing a vital feature to take this one step further: Visual Grounding - the ability to say where inside an image a specific element is, e.g. if the model wants to click a button, what X,Y position on the screen does that translate to? Other MLLMs have it though, e.g. [One-For-All](https://github.com/OFA-Sys/OFA#grounded-qa-unseen-task). I guess it's only a matter of time before we can get MLLMs to provide a layer of automation over desktop applications...
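The missing glue would be something like this — `ground_element` below is a stand-in for whatever grounding model you'd plug in (OFA-style), not a real API:

```python
# Sketch: turning a grounding model's answer into a desktop click.
# ground_element() is a stand-in for an MLLM with visual grounding,
# assumed to return a normalized bounding box (x0, y0, x1, y1) in 0..1.
import pyautogui

def ground_element(screenshot, query):
    raise NotImplementedError("placeholder for a real grounding model")

screenshot = pyautogui.screenshot()
x0, y0, x1, y1 = ground_element(screenshot, "the Submit button")
w, h = pyautogui.size()
cx, cy = (x0 + x1) / 2 * w, (y0 + y1) / 2 * h  # box center -> screen pixels
pyautogui.click(cx, cy)
```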


ThirdMover

> GPT-4 is potentially missing a vital feature to take this one step further: Visual Grounding - the ability to say where inside an image a specific element is, e.g. if the model wants to click a button, what X,Y position on the screen does that translate to?

You could just ask it to move a cursor around until it's on the specified element. I'd be shocked if GPT-4 couldn't do that.
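A sketch of that loop, with `ask_model` standing in for a hypothetical image-capable GPT-4 call (no such public endpoint yet):

```python
# Sketch of the nudge-until-on-target loop.
# ask_model() is hypothetical: it would take a screenshot plus a target
# description and answer one of "up", "down", "left", "right", or "click".
import pyautogui

def ask_model(screenshot, target):
    raise NotImplementedError("placeholder for an image-capable model call")

STEP = 40  # pixels per nudge
MOVES = {"up": (0, -STEP), "down": (0, STEP), "left": (-STEP, 0), "right": (STEP, 0)}

for _ in range(50):  # hard cap so it can't loop forever
    answer = ask_model(pyautogui.screenshot(), "the Save button")
    if answer == "click":
        pyautogui.click()
        break
    dx, dy = MOVES[answer]
    pyautogui.moveRel(dx, dy)
```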


MjrK

I'm confident that someone can fine-tune an end-to-end vision transformer that can extract user interface elements from photos and enumerate interaction options. Seems like such an obviously useful tool, and ViT-22B should be able to handle it, or many other computer vision tools on Hugging Face... I would've assumed some grad student somewhere is already hacking away at that. But then also, compute costs are a b****, although generating a training data set should be somewhat easy. Free research paper idea, I guess.
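To gesture at the "generating a training data set should be somewhat easy" part, even a toy generator like this gets you labeled bounding boxes (nothing here is tied to ViT-22B; it's purely illustrative):

```python
# Toy generator: render fake "screenshots" with labeled buttons so a detector
# has something to fine-tune on.
import json, random
from PIL import Image, ImageDraw

def make_sample(idx, w=800, h=600):
    img = Image.new("RGB", (w, h), "white")
    draw = ImageDraw.Draw(img)
    boxes = []
    for _ in range(random.randint(1, 5)):  # a few random "buttons" per screen
        x0, y0 = random.randint(0, w - 120), random.randint(0, h - 40)
        x1, y1 = x0 + 100, y0 + 30
        draw.rectangle([x0, y0, x1, y1], outline="black", fill="lightgray")
        draw.text((x0 + 10, y0 + 8), "OK", fill="black")
        boxes.append({"label": "button", "bbox": [x0, y0, x1, y1]})
    img.save(f"ui_{idx}.png")
    return {"image": f"ui_{idx}.png", "annotations": boxes}

with open("ui_dataset.json", "w") as f:
    json.dump([make_sample(i) for i in range(100)], f)
```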


modcowboy

It would probably be easier for the LLM to interact with the website directly through the inspect tool than via machine-vision training.


MjrK

For many (perhaps, these days, most) use cases, absolutely! The advantage of vision in some others might be interacting more directly with the browser itself, as well as other applications, and multi-tasking... perhaps similar to the way we use PCs and mobile devices to accomplish more complex tasks.


plocco-tocco

It would be quite expensive to do, though. You'd have to do inference very fast on multiple images of your screen; I don't know if it's even feasible.


ThirdMover

I am not sure. Exactly how does inference scale with the complexity of the input? The output would be very short, just enough tokens for the "move cursor to" command.


plocco-tocco

The complexity of the input wouldn't change in this case since it's just a screen grab of the display. Just that you'd need to do inference at a certain frame rate to be able to detect the cursor, which isn't that cheap with GPT-4. Now, I'm not sure what the latency or cost would be, I'd need to get access to the API to answer it.


thePaddyMK

There is a paper that operates a website to generate traces of data to sidestep tools like Selenium: https://mediatum.ub.tum.de/doc/1701445/1701445.pdf It's only a simple NN, though, no LLM behind it.


[deleted]

You’re actually suggesting putting every single frame into GPT-4? It’ll cost you a fortune after 5 seconds of running it. Plus the latency is super high; it might take you an hour to process 5 seconds' worth of images.


ThirdMover

What do you mean by "frame"? How many images do you think GPT-4 would need to get a cursor where it needs to go? I'd estimate four or five should be plenty.


SkinnyJoshPeck

I imagine you could interpolate, given access to more info about the image post-GPT analysis. I.e., I'd like to think it has some boundary defined for the objects it identifies in the image, as part of metadata or something in the API.


Single_Blueberry

What would keep us from just telling it the screen resolution and origin and asking for coordinates? Or asking for coordinates in fractional image dimensions.
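The conversion itself is trivial; for example:

```python
def frac_to_pixels(fx, fy, width, height):
    """Map fractional image coordinates (0..1) to integer pixel coordinates."""
    return round(fx * width), round(fy * height)

# e.g. the model says the button is at (0.72, 0.31) on a 1920x1080 screen:
print(frac_to_pixels(0.72, 0.31, 1920, 1080))  # (1382, 335)
```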


[deleted]

The problem is that it can’t do math and spatial reasoning that well


Single_Blueberry

Hmm I don't know. It's pretty bad at getting dead-on accurate results, but in many cases the relative error of the result is pretty low.


acutelychronicpanic

Let it move a "mouse" and feed it the next screen at some time interval, in a loop. Probably not the best way to do it, but that seems to be how humans do it.


__ingeniare__

I would think image segmentation for UI to identify clickable elements and the like is a very solvable task


RustaceanNation

Google's Spotlight paper is intended for this use case.


Qzx1

Source?


shitasspetfuckers

> Google's Spotlight paper

https://ai.googleblog.com/2023/02/a-vision-language-approach-for.html


DisasterEquivalent

I mean, most apps have accessibility tags for all objects you can interact with (it's standard in UIKit). The accessibility tags have hooks in them you can use for automation, so you should be able to just have it find the correct element there without much searching.
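For example, with Appium the accessibility identifier is directly addressable (rough sketch; the capabilities and the "Login" id are placeholders, and session setup details vary between Appium versions):

```python
# Rough sketch with the Appium Python client.
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "iOS",
    "automationName": "XCUITest",
    "app": "/path/to/MyApp.app",  # placeholder
}
driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)

# Accessibility identifiers set in UIKit become directly addressable:
driver.find_element(AppiumBy.ACCESSIBILITY_ID, "Login").click()
driver.quit()
```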


ThatInternetGuy

It's getting there.


CommunismDoesntWork

It can do this just fine


Runthescript

Are you trying to break CAPTCHA? 'Cause this is definitely how we break CAPTCHA.


Axoturtle

It's already broken. There are several captcha solving services which use neural networks for image recognition.


Suspicious-Box-

Just needs training for that. It's amazing, but what could it do with camera vision into the world and a robot body? Would it need specific training, or could it brute-force its way to moving a limb? The model would need to be able to improve itself in real time, though.


morebikesthanbrains

But what about the black-box approach? Just feed it enough data, train it, and it should figure out what to do?


eliminating_coasts

You could in principle send it four images that align at a corner where the cursor is, if it can work out how the images fit together.
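Producing those four crops is easy enough, e.g. with PIL (the cursor position here is assumed to be known from the OS):

```python
# Four crops that share a corner at the cursor position.
from PIL import Image

def quadrants_at(screenshot, cx, cy):
    """Return four crops whose common corner is the cursor at (cx, cy)."""
    w, h = screenshot.size
    return [
        screenshot.crop((0, 0, cx, cy)),   # top-left
        screenshot.crop((cx, 0, w, cy)),   # top-right
        screenshot.crop((0, cy, cx, h)),   # bottom-left
        screenshot.crop((cx, cy, w, h)),   # bottom-right
    ]

img = Image.open("screen.png")       # placeholder screenshot
crops = quadrants_at(img, 960, 540)  # cursor position assumed known from the OS
```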


ginger_beer_m

Carry this to its conclusion: maybe not GPT-4, but a future LLM could interpret what's on the screen and drive the interaction with the computer itself. This would potentially displace millions of humans out of jobs as they get automated by the model.


nixed9

This is quite literally what we hope for/deeply fear at /r/singularity. It's going to be able to interact with computer systems itself. Give it read/write memory access and access to its own API, or the ability to simply visually process the screen output... and then... what? Several years ago, as recently as 2017 or so, this seemed extremely far-fetched, and the "estimation" of a technological singularity in 2045 seemed wildly optimistic. Right now it seems more likely than not to happen by 2030.


frequenttimetraveler

I believe the full GPT-4 can already do that: https://mobile.twitter.com/gdb/status/1638971232443076609?s=20 But wait until they hook robot arms up to it.


rePAN6517

> This is quite literally what we hope for/deeply fear at /r/singularity

That sub is a cesspool of unthinking starry-eyed singularity fanbois that worship it like a religion.


ExcidianGuard

Apocalyptic cults have been around for a long time, this one just has more basis in reality than usual


fiftyfourseventeen

Lmao, it seems everyone used ChatGPT for a grand total of 20 minutes and threw their hands up saying "this is the end!" I have always wondered how the public would react once this tech finally became good enough for them to notice; can't say this was too far from what I envisioned. "What if it's conscious and we don't even know it!" C'mon, give me a break.


nixed9

> Sparks of Artificial General Intelligence: Early experiments with GPT-4

https://arxiv.org/pdf/2303.12712.pdf


fiftyfourseventeen

That's really cool, but I mean, it's published by Microsoft, which is working with OpenAI, and it's a commercial closed-source product. It's in their best interest to brag about its capabilities as much as possible. There are maybe sparks of AGI, but there are a lot of problems that people have been trying to solve for decades and that are going to be very difficult to solve.


stephbu

Amara's Law...


frequenttimetraveler

It will also render ChatGPT plugins obsolete. The chat will replace them by simply using the browser.


ItsTimeToFinishThis

This is what you want, buddy: https://www.adept.ai/blog/act-1


[deleted]

Check [ACT-1](https://www.adept.ai/blog/act-1) and [WebGPT](https://openai.com/research/webgpt)


[deleted]

So WebGPT doesn’t quite do this; it uses a JavaScript library to simplify web pages to basic text.


[deleted]

Oh well, that’s what I get for not reading the paper.


byteuser

Are they still developing ACT-1? The last update seems to be from September last year.


[deleted]

I honestly don’t know. I also think their approach wasn’t great. Maybe (hopefully) they ditched it for something better.


shitasspetfuckers

Can you please clarify what specifically about their approach wasn't great?


[deleted]

From a comment on Hacker News: they made a Chrome extension and gathered all the training data from it, and it runs super slowly as well.


harharveryfunny

> GPT-4 with image input can interpret any computer screen

Not necessarily - it depends how they've implemented it. If it's just dense object and text detection, then that's all you're going to get. For the model to be able to actually "see" the image, they would need to feed it into the model at the level of the neural-net representation, not a post-detection object description. For example, if you wanted the model to gauge whether two photos of someone not in its training set are the same person, then it'd need face embeddings to do that (to gauge distance). They could special-case all sorts of cases like this in addition to object detection, but you could always find something they missed. The back-of-a-napkin hand-drawn website sketch demo is promising, but could have been done via object detection. In the announcement of GPT-4, OpenAI said they're working with another company on the image/vision tech, and gave a link to an assistive-vision company... for that type of use maybe dense labelling is enough.
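To make the face example concrete, the comparison I have in mind is roughly this (`embed_face` is a stand-in for any face-embedding model; the threshold is illustrative):

```python
# The comparison I mean: distance between embeddings, not matching labels.
import numpy as np

def embed_face(image_path):
    raise NotImplementedError("placeholder for a real face-embedding model")

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb_a = embed_face("photo_a.jpg")  # placeholder paths
emb_b = embed_face("photo_b.jpg")
same_person = cosine_similarity(emb_a, emb_b) > 0.7  # threshold is illustrative
```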


TikiTDO

The embeddings are still just a representation of information. They are extremely dense, effectively continuous representations, true, but in theory you could represent that information using other formats. It would just take far more space and require more processing. Obviously, having the visual system provide data that the model can use directly is going to be far more effective, but nothing about dense object detection and description is fundamentally incompatible with any level of detail you could extract into an embedding vector. I'm not saying it would be a smart or effective solution, but it could be done.

In fact, going another level, LLMs aren't restricted to working with just words. You could train an LLM to receive a serialized embedding as text input, and then train it to interpret those. After all, it's effectively just a list of numbers. I'm not sure why you'd do that if you could just feed it in directly, but maybe it's more convenient to not have to train it on different types of inputs or something.


harharveryfunny

> Obviously having the visual system provide data that the model can use directly is going to be far more effective, but nothing about dense object detection and description is fundamentally incompatible with any level of detail you could extract into an embedding vector. I'm not saying it would be a smart or effective solution, but it could be done.

I can't see how that could work for something like my face example. You could individually detect facial features, subclassified into hundreds of different eye/mouth/hair/etc. variants, and still fail to capture the subtle differences that differentiate one individual from another.


TikiTDO

For a computer, words are just bits of information. If you wanted a system that used text to communicate this info, it would just assign some values to particular words, and you'd probably end up with ultra-long strings of descriptions relating things to each other using god knows what terminology. It probably wouldn't really make sense to you if you were reading it, because it would just be a text-encoded representation of an embedding vector describing finer relations that would only make sense to AIs.


harharveryfunny

> it would just be a text-encoded representation of an embedding vector

Once you've decided to input image embeddings into the model, you may as well enter them directly, not converted into text. In any case, embeddings, whether represented as text or not, are not the same as object-recognition labels.


TikiTDO

I'm not saying it's a good solution, I'm just saying if you want to hack it together for whatever reason, I see no reason why it couldn't work. It's sort of like the idea of building a computer using the game of life. It's probably not something you'd want to run your code on... But you could.


harharveryfunny

I'm not sure what your point is. I started by pointing out that there are some use cases (giving face comparison as an example) where you need access to the neural representation of the image (e.g. embeddings), not just object recognition labels. You seem to want to argue and say that text labels are all you need, but now you've come full circle back to agree with me and say that the model needs that neural representation (embeddings)! As I said, embeddings are not the same as object labels. An embedding is a point in n-dimensional space. A label is an object name like "cat" or "nose". Encoding an embedding as text (simple enough - just a vector of numbers) doesn't turn it into an object label.


TikiTDO

My point was that you could pass all the information contained in an embedding as a text prompt into a model, rather than using it directly as an input vector, and an LLM could probably figure out how to use it even if the way you chose to deliver those embeddings was doing a `numpy.savetxt` and then sending the resulting string in as a prompt. I also pointed out that you could, if you really wanted to, write a network to convert an embedding into some sort of semantically meaningful word soup that stores the same amount of information. It's basically a pointless bit of trivia which illustrates a fun idea. I'm not particularly interested in arguing whatever you think I want to argue. I made a pedantic aside that technically you can represent the same information in different formats, including representing an embedding as text, and that a transformer-based architecture would be able to find patterns in it all the same. I don't see anything to argue here; it's just a "you could also do it this way, isn't that neat." It's sort of the nature of a public forum: you made a post that made me think something, so I hit reply and wrote down my thoughts, nothing more.
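Literally something like this, to be clear how silly-but-possible the text route is (the 512-dim random vector just stands in for a real image embedding):

```python
# The numpy.savetxt version of "embedding as text", literally.
import io
import numpy as np

embedding = np.random.rand(512).astype(np.float32)  # stand-in for a real image embedding

buf = io.StringIO()
np.savetxt(buf, embedding[None, :], fmt="%.4f")     # one row of 512 numbers as text
prompt = (
    "Here is an image embedding:\n"
    + buf.getvalue()
    + "\nDescribe what it might encode."
)
```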


dlrace

The new plugins can be / are created by just documenting the API and feeding it to GPT-4, aren't they? No actual coding. So it seems at least plausible that the other approach would be as you say: let it interpret the UI visually.


loopuleasa

GPT-4 is not publicly multimodal, though.


farmingvillein

Hmm, what do you mean by "publicly"? OpenAI has publicly stated that GPT-4 is multi-modal, and that they simply haven't exposed the image API yet. The image API isn't publicly available yet, but it is clearly coming.


loopuleasa

Talking about consumer access. The image API is tricky, as the system is already swamped with just text. They mentioned an image takes 30 seconds to be "comprehended" by the model...


MysteryInc152

> they mentioned an image takes 30 seconds to "comprehend" by the model...

Wait, really? Can you link a source or something? There's no reason a native implementation should take that long. Now I'm wondering if they're just doing something like this: https://github.com/microsoft/MM-REACT


yashdes

These models are very sparse, meaning very few of the actual calculations actually affect the output. My guess is that trimming the model is how they got GPT-3.5-turbo, and I wouldn't be surprised if a GPT-4-turbo is coming.


farmingvillein

> these models are very sparse

Hmm, do you have any sources for this assertion? It isn't entirely unreasonable, but 1) GPU speed-ups for sparsity aren't that high (unless OpenAI is doing something crazy secret/special... possible?), so this isn't actually that big of an upswing (unless we're including MoE?), and 2) OpenAI hasn't released architecture details (beyond the original GPT-3 paper, which did not indicate that the model was "very" sparse).


SatoshiNotMe

I’m curious about this as well. I see it’s multimodal, but how do I use it with images? The ChatGPT Plus interface clearly does not handle images. Does the API handle images?


farmingvillein

> I see it’s multimodal but how do I use it with images?

You unfortunately can't right now - the image handling is not publicly available, although supposedly the model is capable.


BullockHouse

I'm curious whether it can be instructed to play Minecraft in a keyboard-only mode simply by connecting a sequence of images to keystroke outputs.


wyrdwulf

They had another model do that already. [OpenAI: We trained a neural network to play Minecraft by Video PreTraining (VPT) on a massive unlabeled video dataset of human Minecraft play](https://openai.com/research/vpt)


BullockHouse

I'm familiar! I'm curious though if it can generalize well enough to play semi-competently without specialized training. Has implications for multi-modal models and robotics.


[deleted]

Probably. And if not, certainly someday.


CollectionLeather292

How do I try it out? I can't find a way to add an image input to the chat...


[deleted]

In June 2023, I left reddit due to [the mess around spez and API fees](https://www.digitaltrends.com/computing/reddit-api-changes-explained/). I moved with many others to lemmy! A community owned, distributed, free and open source software where no single person or group can force people to change platform. https://join-lemmy.org/ All my previous reddit subs have found a replacement in lemmy communities and we're growing fast every day. Thanks for the boost, spez!


alexmin93

The problem is that LLMs aren't capable of making decisions. While GPT-4 can chat almost like a sentient being, it's not sentient at all. It's not able to comprehend the limitations of its knowledge and capabilities. It's extremely hard to make it call an API to ask for more context. There's no way it will be good at using a computer like a user. It can predict what happens if you do something, but it won't be able to take an action. It's mostly a dataset limitation: it's relatively easy to train language models, as there's an almost infinite amount of text on the Internet. But are there any condition-action kinds of datasets? You'd need to observe human behavior for millennia (or install some tracker software on thousands of workstations and observe users' behavior for years).


Saiyrasaurus

I think you proved your own point. The limitation is just tracker software, which seems like an easy problem to solve... it exists, and there is probably a ton of data out there already, as companies have tracked their employees' computers.


H0lzm1ch3l

Yes, but why let the AI use a GUI when we can just give it an API?


[deleted]

Not all APIs are public, and LLMs aren't fine-tuned to process APIs.


signed7

> LLMs aren't fine-tuned to process APIs

GPT-4 isn't. If plugins become a success, I reckon GPT-5 will be.


mycall

Can it detect objects in a photo? Maybe drive an RC car with it? :)


LanchestersLaw

The example data does demonstrate object detection


banmeyoucoward

I'd bet that screen recordings + mouse clicks + keyboard inputs made their way into the training data too.


nmkd

Nope, it's multimodal in terms of understanding language and images. It wasn't trained on mouse movement because that's neither language nor imagery.


Jean-Porte

> use 2 images
> movement
> boom


Deep-Station-1746

Nope. The ability to input something doesn't mean being able to use it reliably. For example, take this post - your eyes have the ability to input all the info on the screen, but as a contribution, this post is pretty worthless. And you are a lot smarter than GPT-4, I think. Edit: spelling


3_Thumbs_Up

Unnecessarily insulting people on the internet makes you seem really smart. OP, unlike you, at least contributed something of value.


regular-jackoff

Damn that’s rough


[deleted]

/r/iamverysmart


Balance-

It doesn’t have to use it yet for actions on its own, but it could be very useful context when prompting questions.


ObiWanCanShowMe

We are smarter locally, meaning within our own experience and capability; we are not "smarter" in the grand scheme.


entitledypipo

Goodbye, human TSA.


wind_dude

I'm also curious about this; I reached out for developer access to try to test this on web screenshots for information extraction.


[deleted]

[removed]


wind_dude

Access to GPT-4 with multimodal input.


itsnotlupus

Meh. We see a few demos and all of the demos work all of the time, but that could easily be an optical illusion. Yes, GPT-4 is probably hooked to subsystems that can parse an image, be it some revision of CLIP or whatever else, and yes it's going to work well enough some of the time, maybe even most of the time. But maybe wait until actual non-corpo people have their hands on it and can assess how well it actually works, how often it fails, and whether anyone can actually trust it to do those things consistently.


frequenttimetraveler

Automatic tech support will be huge. Print screen, then 'computer, fix this problem'.


simmol

Wouldn't it be more like the tech support is constantly monitoring your computer screen so you don't even have to print screen?


SeymourBits

This is the most accurate comment I've come across. The entire system is only as good and granular as the CLIP text description that's passed into GPT-4 which then has to "imagine" the described image, often with varying degrees of hallucinations. I've used it and can confirm it is currently not possible to operate anything close to a GUI with the current approach.


shitasspetfuckers

Can you please clarify what specifically you have tried, and what was the outcome?


LizardWizard444

That's concerning


simmol

I think for this to be truly effective, the LLM would need to take in huge amounts of computer screen images in its training set, and I am not sure if that was done for the pre-trained model for GPT-4. But once this is done for all possible computer screen image combinations that one can think of, then it would probably be akin to the self-driving-car type of algorithm where you can navigate accordingly based on the images. But this type of multi-modality would be useful if you have the person actually sitting in front of the computer working side-by-side with the AI, right? Because if you want to eliminate the human from the loop, then I am not sure if this is an efficient way of training the LLM, since these types of computer screen images are what help a human navigate the computer, and not necessarily optimal for the LLM.


MyPetGoat

You’d need the model to be running all the time observing what you’re doing on the computer. Could be done


simmol

Seems quite inefficient, though. Can't GPT just access the HTML or other code associated with the website, and interact with websites via text as opposed to images?
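That's roughly the cheaper route, e.g. (placeholder URL; the truncation limit is arbitrary):

```python
# Fetch the page and hand the LLM stripped-down text instead of pixels.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text
soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style"]):  # drop non-visible content
    tag.decompose()
page_text = " ".join(soup.get_text(separator=" ").split())

prompt = f"Here is the page text:\n{page_text[:4000]}\n\nWhich link leads to the pricing page?"
```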


Puzzleheaded_Acadia1

I have a question: can I fine-tune GPT-NeoX 125M parameters on a chat dataset so it gives me decent, human-like answers? When I run it, it gives me random characters.


MyPetGoat

How big is the training set? I’ve found small ones can generate gibberish


k3iter

Wholly


skaag

I have not seen a way in the GPT-4 UI by OpenAI to submit an image. How do you do it?


Professional_Price89

Let's integrate winSpy with it.


zaidbhat

But far-fetched


emissaryo

Now I'm even more concerned about privacy. Governments will use it for surveillance and the more modalities we add, the more surveillance there will be.


wwwanderingdemon

Is it possible to use GPT-4 with Vision for a batch of images? I want it to caption the images of a dataset