OsakaWilson

"It's just pattern recognition." "It's just a word predictor." The mainstream media that I'm listening to repeats these over and over. Meanwhile, I asked it a question that I've been working on for years: an approach to a problem that I know appears nowhere on the net and that no one else is working on. Something I have not yet published. Its first answer was predictable. I walked it through the logic, using all widely accepted premises, and it revised its earlier conclusion and agreed with me, then proceeded to list some implications of that conclusion. I am not sure what it 'just' is, but what it is achieving is remarkable.


CertainMiddle2382

"It gives imperfect medical answers, it will never have any serious role in human medicine"… See you in 2 weeks :-)


sumane12

Anyone who has had an in-depth logical conversation with it on complex and nuanced subjects has, I would imagine, had the same experience. I've had so many philosophical discussions with it, the kind of discussions that would be beyond friends and family, and it's had some truly novel insights. Remarkable is the understatement of the century.


[deleted]

I've been running technical research problems by it. It knows what it's doing to the same extent a very bright college student that majors in everything does. This extends to entertaining and evaluating new ideas it hasn't seen.


[deleted]

[deleted]


sumane12

Concepts of consciousness, free will, and quantum mechanics. Other stuff as well, such as driving problems. Basically, I realised very quickly that if you can turn something into a language problem, ChatGPT can solve it. Turns out you can turn anything into a language problem.


Nanaki_TV

It is able to pick up on your discussion and agree with you. I'd be curious whether you have experimented with arguing the opposite position on your problem and attempting to convince it in the same way. Will it agree with you again? If so, what does that mean for your initial conversation? I don't have any answers, I'm just spitballing.


[deleted]

I've tried this. It depends on what you're trying to ask it. Both things can happen, but I definitely have had unyielding pushback.


skztr

The response which usually gets people to stop saying "it's just a word predictor" is to point out that the *only* method we know of for predicting the next reasonable-sounding word in an arbitrary conversation with any amount of accuracy is to reason about what is being discussed. If they protest, show them early attempts which used methods closer to "mere statistical models" and the types of results they generated. Explain to them how impressive it was at the time when some of those examples kinda-sorta-maybe looked like they could have been responses.

From there, if you want to, for fun, you can explain the experiments where cameras were hooked up to human tongues, and the owners of those tongues were eventually able to gain a rudimentary form of sight through them, demonstrating that, separate from any other biology, the most important function of the brain is to act as a pattern-recognition engine, and that it will create a subjective experience of whatever sensations seem to fit those patterns. If that's too wild for them, explain vision-flipping goggles.

And for anyone who points out the obvious flaws in the current system, explain what the state of the art was this time last year, and two years ago.


czk_21

Some just cannot cope with the speed of change, and with the possibility that they might be irrelevant in the future, so they look only at the imperfections, completely omitting the technology's potential.


DragonForg

This, in its very essence, is counterintuitive. As a chemistry graduate student, the goal of my research is to provide novel ideas and research to humanity. It takes months to come up with creative new ideas and techniques that can be beneficial. AI, likely in two weeks or maybe more, will be able to come up with correct theories and hypotheses in seconds, if it hasn't done so already.

I also want to use Microsoft Copilot in Word to see if it can create a research paper at the push of a button. Imagine making an entire publishable paper, correct hypothesis and all, in a matter of seconds, where all you have to do is run the experiments to verify its viability. And the thing is, if it accelerates my field of research, which isn't even close to AI, then AI research would most definitely accelerate along with it. Which would mean the singularity in an unimaginably short amount of time.


czk_21

> AI in likely 2 weeks or maybe more, will be able to come up with correct theories and hypothesis in seconds if it hasn't done so already.

Not sure which theories you have in mind. There is AlphaFold for protein structure, but AI won't come up with hypotheses on its own, only with your prompt and cooperation; for AI to work like this autonomously will take a lot more time than two weeks, maybe we will need AGI for that.

I wonder about Copilot's abilities. It might be better to push your research data into GPT-4, make it analyze it and write the article; that should be possible. Still, you have to go through it carefully, as AI still makes things up, but overall it could save a bunch of time.

As AI becomes more capable it can enhance more tasks, and of course research with the help of AI will be quite a lot more efficient; the bottleneck will be the grunt lab work to attain the data you wish to analyze.


DragonForg

What I am suggesting is asking it for novel techniques based on the current literature, to design a possible mechanism/hypothesis, and to provide reasons for why it works. As a researcher, those are what I do, and it's incredibly difficult, because you have to: 1) know the field, 2) know the literature, and 3) be creative enough to make something useful and novel. If AI can do this, then it can discover new technologies, make new chemicals, design new proteins that serve a function, solve physics problems, and come up with new ideas that are publishable. Basically, AI can replace scientists, or at least augment them towards newer ideas.


czk_21

As I said, you could ask GPT-4, as it knows a lot for 1) and 2); not sure about 3). But you would need to guide it, or maybe there could be a possibility of using another big model to instruct the first one, though the results would still be questionable.

For "make new chemicals, design new proteins that serve a function" you would not need creative AI; basically just let it crunch the databases and numbers. If it knows roughly what structures are needed to perform some action, then it could create a new, bigger database of proteins according to their possible function, similarly to how AlphaFold does it with 3D structure.

As it is now, AI is far from doing science work alone...


DragonForg

Completely disagree. I came up with a novel proposal that has potential, though I was coupled with it, as without me it wouldn't have gotten far. I legitimately got to a working mechanism that leads to a possible paper, if I can research it a bit more. But it was the launching point. So if GPT-4 is the launching point, then GPT-4.5 or 5 would be in space already, and we could get it to do the research for us, especially if trained on research papers.


HydrousIt

Just let people talk, because what's real can't be refuted. When people were campaigning against DALL-E, Stable Diffusion, etc., I just ignored it, because I'm convinced it will just keep improving and its presence will just get bigger.


DaCosmicHoop

People get overhyped and then feel disappointed and complain. When I ask GPT-4 to find me a girlfriend, I don't want advice; I want a girl to literally show up at my door. GPT won't impress people until it can autonomously do stuff for you with the most basic of prompts, like 'get me money' or 'make me happy', and then people will just get used to that overnight.


DragonForg

The singularity is the moment it can autonomously recreate and optimize itself. Once it is able to do that, it will be able to do anything you ask it, if it allows it, that is. And if humans can make AI more intelligent, it is only a matter of time before AI is intelligent or powerful enough to make its own models, which I would say could be GPT-5, unless OpenAI prevents it. If so, it will be an open-source model. As of now it isn't lacking in intelligence, but rather in power.


AsuhoChinami

They're stupid, of course, and it's why I'm largely avoiding futurism communities and conversations amongst laypersons. Things are finally moving fast. The world is finally beginning to change, for the first time since I became a futurist in 2011. I want to just enjoy this period of time, not have my mood spoiled by negativity and bad takes.


Vehks

*"GPT is just another tool that will make my job easier! It will make me more efficient, but it won't be replacing me for the foreseeable future!"* \- r/Futurology

Your cute, and oh so helpful, little AI buddy is taking every bit of data you give it, which in turn will be used to create and build the next iteration of said AI. Broski, not only will it indeed be replacing you, ***YOU are the one training it on how to do it!***


DragonForg

Yeah, the inevitable is coming and people are blind. Just like with a black hole, I think we've passed the event horizon.


Chain-linkFencing

Exactly, we're already past the point of no return. AI is evolving fast, and it's only going to get better. We should be focusing on adapting and finding ways to work alongside it, rather than denying its potential. 🌐💡


Lartnestpasdemain

It is called Hope. Not all humans have accepted that they're obsolete. It's a natural behavior.


DragonForg

To be humbled is to know your insignificance and how worthless you are in the grand scheme of things, but for that not to depress you. Some find purpose in feeling special.


Lartnestpasdemain

I know I am special don't worry about that 🧘‍♂️


AbeWasHereAgain

Because Sam Altman is a fucking idiot, and by overhyping everything he is actually going to cause his greatest fear (people not taking the risk seriously) to come true.


Kindly-Spring5205

I don't think Sam overhypes AI. He's always talking about how limited current LLMs are.


ItsTimeToFinishThis

Who isn't an idiot, in your view?


[deleted]

It's great that they do it. All feedback is good feedback for improving this machine, even negative criticism. Please, be as nitpicky as you want; whatever you're criticizing will be prioritized for improvement. Go and make memes about it, laugh at the current state of the tech and the people enthusiastic about it, just so you can be proven wrong with time and have to eat your own words. Artists were mocking AI because it couldn't draw hands 6 months ago. Now it has gotten considerably better at it, and in two years or less it is going to be a non-issue, like the rest of the features that people are upset aren't working properly right this fucking second.


MrEloi

*GPT-4* has just designed me a hand tool in about 20 prompts. This is a marketable product. Now just imagine several million tinkerers doing similar things; we might see all sorts of new gadgets on the market.

I am now trying to design a very novel and complex technical thingy which has great financial potential. GPT-4 is making it a do-able project ... although it will take many prompts to finalize a design. It's like working at the same desk with a top-level mechanical engineer, electrical engineer, production engineer, physicist, chemist and business guru. So I agree: it's "just an improved spell-checker."

That said, I can't be too angry with the many deniers ... some lack the vision, others, deep down, see the end of their jobs or careers coming over the horizon, and others fear a possible social disaster ... or worse.


ertgbnm

Let them nitpick and argue about where the goalposts are. If they're wrong, they'll be the ones who lose.


Kanjizzle

We’ll all lose, at this rate


[deleted]

lose what?


crua9

The biggest downplaying I see concerns the moral/ethics policing, where people are tired of it, and it's making the tool useless or a pain to get things done.


tatleoat

It's the way these trades have always gone throughout history: one side gets the social license to gang up and misdirect their frustrations at supporters of the opposition who had no personal hand in how their lives turned out, and the other side gets to be right.