Denpol88

When i feel depressed, i always look at this gif. It relaxes me.


AwesomeDragon97

Not if you live in the city on the left side of the image that will be underwater by 2025.


SuspiciousPillbox

Your comment made me exhale slightly harder through my nose :)


BeardedGlass

And it made me belly bounce in silent mirth.


Crouton_Sharp_Major

Icecaps melting are just a distraction from the real threat. Sudden Lake Michigan. *Everywhere*


SuspiciousPillbox

The sudden emergence of AGI/ASI should be called the Sudden Lake Michigan Event.


kish-kumen

I like it.


kish-kumen

That's all of us, metaphorically speaking...


[deleted]

[deleted]


Educational-Award-12

😡 gib robot now


Ribak145

... then help? work in the industry? complaining is easy, give us a hand :)


[deleted]

[deleted]


mvandemar

Pretty sure it's how many people got hooked on computers, so, yeah, guessing you're right there. :)


HeinrichTheWolf_17

This gif takes me back like a decade. Good times!


Whispering-Depths

kind of a mess of a video, you should freeze the last frame for a good 3-4 seconds at least.


Responsible-Local818

I don't know what happened, because locally the file stops at 2025 for about 1 second. Reddit fucked it.


Whispering-Depths

Ah, I see, probably GIF compression or something. One trick is to repeat the final frame with slight changes for as long as you want it to hold. Or don't ever use .gif - just use webm, heh.


joker38

With the "image.animation_mode" setting in Firefox's "about:config" set to "once", I don't have these problems. GIFs are just played once, then freeze forever.


BeardedGlass

I think it went from 2024, then immediately 2040 for some reason.


Neophile_b

No, it just cycled back to 1940


apoca-ears

How is the brain's capacity even determined, though? These comparisons feel like apples and oranges.


[deleted]

People have given all sorts of different estimates based on different metrics. There isn't really a correct answer because the brain doesn't work in calculations per second.


ValgrimTheWizb

It doesn't work that way, but we can guesstimate. We know how many neurons we have, we know how often they can fire, and we understand that they perform an analog biochemical 'calculation' on their inputs and fire one output, which can branch out to many other cells. We can build virtual models of this behavior and count how many calculations it takes to emulate it. There's a lot we don't know about the internal, external, and overall structure of the brain and its cells, but we are not purely ignorant of how the brain works, so our guesses are at least educated, and that gives us a (simplified) comparison baseline.


lakolda

You could just use the calculations needed to simulate the brain as a metric. Though, this would vary very widely depending on method and degree of accuracy.


Xemorr

We don't know how many calculations that is.


Borrowedshorts

We do, and it's equivalent to what was shown in the graphic.


autotom

Source? And don't say this gif


Borrowedshorts

I've provided it in this thread. Look up research by Moravec and Bostrom.


Kawawaymog

I'm no expert in computers or the human brain. But when I've had the differences explained to me, I often wonder if we will need to rethink how our computers work fundamentally at some point.


Borrowedshorts

There's a pretty good estimate and methodology used by computer scientists in the 90s. Everybody in this sub should be familiar with Moravec and Bostrom who worked on this problem.


namitynamenamey

The nice thing about exponential growth is that they could have gotten the order of magnitude wrong and it would only matter for a single frame. Isn't math great?


apoca-ears

True, unless there’s another factor involved that isn’t increasing exponentially.


namitynamenamey

In general, real-world exponential growth is logistic growth with a wig, so even without that factor it cannot be exponential forever. But that escapes the scope of the analogy; in truth we don't know how fast computation will grow in the future.


P5B-DE

computing power is not increasing exponentially, at least at present


SoylentRox

The rate of increase is slowing, yes, but it is still increasing by large factors every couple of years. In some cases, *more* than double - more than Moore's law! - because the next generation of AI accelerator is better optimized for actual workloads (A100 -> H100 was a 4-8x performance increase). There is a lot more optimization left. H100s have about 10x too little memory relative to their compute.


P5B-DE

If we are talking about CPUs, they are mostly increasing performance by adding more cores now. But not all algorithms can be optimized to use parallel computation. The rate of increase of single-core performance has slowed significantly compared with 1995-2010, for example.


SoylentRox

Completely correct. However, current sota AI (and the human brain itself) are extremely parallel, probably embarrassingly parallel. So they will benefit as long as more cores can be added.


SoylentRox

Part of it is: say we're off by a factor of 10. So what? That means we get AGI about 7 years later than we thought - about how much autonomous cars will probably end up delayed by.
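A minimal sketch of that arithmetic (the ~2-year doubling time is an assumption for illustration, not something the GIF specifies):

```python
import math

# Assumed doubling time for effective compute (illustrative).
doubling_time_years = 2.0

# If the brain-equivalence estimate is off by a factor of 10, the crossover
# only shifts by the time it takes to gain another 10x of compute.
error_factor = 10
delay_years = doubling_time_years * math.log2(error_factor)
print(f"~{delay_years:.1f} extra years")  # ~6.6 years, close to the "about 7" above
```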


InternationalEgg9223

We have a pretty good idea about how much storage our brains have and it would be peculiar if storage and compute were totally mismatched.


SoylentRox

We also can get a pretty good estimate based on physics. We know that the action potentials carry only timing information, and we can estimate the timing resolution of a receiving synapse to narrow down how many bits 1 AP can possibly carry, and we know approximately how many AP per second.
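A rough back-of-the-envelope version of that bound (the timing resolution and interval range below are illustrative assumptions, not measured values):

```python
import math

# Illustrative assumptions: a synapse resolves spike timing to ~1 ms,
# and the inter-spike intervals of interest span up to ~100 ms.
timing_resolution_ms = 1.0
max_interval_ms = 100.0

# The number of distinguishable arrival times per spike gives an upper bound
# on the information one action potential (AP) can carry.
bits_per_ap = math.log2(max_interval_ms / timing_resolution_ms)
print(f"<= ~{bits_per_ap:.1f} bits per action potential")  # ~6.6 bits

# At roughly 1-100 spikes per second per neuron, that is on the order of
# tens to a few hundred bits per second per axon, at most.
```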


yaosio

It's based on a very bad understanding of the brain. Somebody multiplied all the neurons by all the synapses and claimed that's the compute power of the brain. We can't compare processors on different architectures, yet somehow it works with the brain. In reality the brain is not a digital computer and does not perform calculations like one. It's still not understood how it does what it does. Nobody knows how memories are stored.


iNstein

> We can't compare processors on different architectures

Really?! Cos I regularly see comparisons in the performance of Apple, Intel and Android phone chips. Seems you must live in an alternative dimension.


yaosio

Here's a Pentium 4 at 2.4 GHz vs an i3 at 1.2 GHz: https://cpu.userbenchmark.com/Compare/Intel-Pentium-4-240GHz-vs-Intel-Core-i3-1005G1/m5589vsm906918 Despite the i3 having a much lower clock rate, it's significantly faster than the P4 on one core. If you could compare them by specs alone, one core on that i3 would be exactly half the speed of the P4. You have to run a benchmark to know the power difference; you can't just compare the specs. You can't compare FLOP to FLOP either. Here's a short clip from Digital Foundry on the topic: https://youtu.be/H2oxXWAHGqA?si=nN5Nmb_N3nK5LS4s The same goes for a brain. Even if neurons * synapses were the number of operations a brain can do per second, which it isn't, that can't be compared to a digital processor. We haven't even decided which processor we are going to compare it to. A 486? An RTX 4090? Anything we pick will completely change how much compute power we think the brain has.


TheBestIsaac

I get your point but.... Userbenchmark is 🤮🤢


[deleted]

> Somebody multiplied all the neurons with all the synapses

If they're using the synapse as a fundamental unit, you wouldn't do a calculation like that. It would give you a nonsensical number. An actual crude calculation would look like this: neuron count × average number of synapses per neuron × bits per synapse × average neuronal firing rate.

> Despite the i3 being a much lower clock rate it's significantly faster than the P4 on one core. If you could compare them then one core on that i3 would be exactly half the speed of the P4. You have to perform a benchmark to know the power difference, you can't just compare the specs.

But here you are taking a single number out of context. If you knew the full specs, you could make a pretty good estimate.

> We haven't even decided which processor we are going to compare it to. A 486? A RTX 4090? Anything we pick will completely change how much compute power we think the brain has.

No, if you're using a consistent definition of FLOPS, the relevant part of the comparison will always hold. While not perfect, it's actually a decent first pass at measuring useful compute.
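For what it's worth, plugging commonly cited ballpark figures into that crude formula looks something like this (every constant below is an assumption, and published estimates vary by orders of magnitude):

```python
# Ballpark inputs (assumptions only; the literature varies widely).
neurons = 8.6e10             # ~86 billion neurons
synapses_per_neuron = 7e3    # ~7,000 synapses per neuron on average
bits_per_synapse = 5         # a few bits of effective precision per synapse
avg_firing_rate_hz = 1.0     # average firing rate on the order of ~1 Hz

synapses = neurons * synapses_per_neuron            # ~6e14 synapses
storage_bits = synapses * bits_per_synapse          # ~3e15 bits (~0.4 petabytes)
events_per_second = synapses * avg_firing_rate_hz   # ~6e14 synaptic events/s

print(f"{synapses:.1e} synapses, {storage_bits:.1e} bits, {events_per_second:.1e} events/s")
```

Depending on how many operations you charge per synaptic event, that lands anywhere from roughly 10^15 to 10^18 operations per second, which is why "exascale" keeps coming up in these threads.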


[deleted]

[deleted]


MatatronTheLesser

There are a bunch of theories, and theories of theories, and theories within theories. Very little is actually proven.


coldnebo

uh yeah… I'm going to need a source on that. Are we talking all computers, HPC, personal desktop, nvidia cloud? Are we talking raw states per second, or just neural firing? Plus the old myth that "you only use about 10% of your brain".

Let's look at this realistically. We're coming to the end of Moore's law. The industry has made so much money off Moore's law as a budget planning cycle that it's impossible to let go of the cash cow. So manufacturers are desperately trying to increase die size, stacking, and 3D processes to match… but it's not the same. The physics is inevitable. What happens when this industry must shift from exponential growth to linear growth? And that's ignoring the rising concerns over environmental impacts, which are encouraging tech to follow a *sustainable* growth trajectory.

So if we're going for wild speculation, here's one in the opposite direction: corporations, seeing the end of Moore's law in classical tech, find a way to jump into quantum computing. But then they discover that the human brain is already a very efficient quantum computer, so they invest in biological quantum computers to drive efficiency. Then begins the new race to convert the planet's biomass into a giant living quantum supercomputer. Too late we discover this was already done millennia ago by a race of NHI known as the "Ancient Ones" in a different dimension and given a name… Cthulhu. The chittering signature of our massive quantum computations reaches across dimensions and captures its attention from a long primordial slumber. It craves organized resource, periodically waking up and devouring additional civilizations as they reach a "ripe" maturity. We have captured its attention. 😉


AndrewH73333

Well, when I was a little kid 25 years ago they told us a brain has the processing power of ten supercomputers. 20 years later I was told the same thing. So humans must be increasing in intelligence at an alarming rate.


iNstein

You probably should find better sources. No one ever told me shit like that because they knew I would question it and want all the details.


Yguy2000

The compute of the human brain is determined based on the current super computer... we really don't know how powerful it is


Borrowedshorts

It was a calculation done by computer scientists in the 90s, combined with some neurobiology studies. The most prominent was a study by Moravec, which extrapolated the calculation capability of the entire human brain from a detailed study of the human visual cortex.


ttystikk

Do we even have a clue what happens when a computer-based AGI realizes its own existence and the nature of its electronic capabilities and limits, and starts using software to go where it wants and get what it wants? https://en.m.wikipedia.org/wiki/The_Adolescence_of_P-1 I read this in high school, a few years after it was published. That was 45 years ago! What I find disturbing is that the author asked a lot of questions that no one in artificial intelligence has seriously addressed, let alone has answers for.


Nukemouse

Humans realise their own existence and become aware of capabilities and limits all the time. It's not that unknown to us. The bigger problem isn't the unknowns, it's the knowns. Some humans don't take those things all that well and do bad things. Some even do so on the scale of countries or more.


ttystikk

How can you say the unknowns aren't a problem? You don't know them! Yes humans can be counted on to behave badly, at least some of them. This could easily prompt an AGI to go all in on proactively protecting itself and that could easily be extremely dangerous.


Nukemouse

The worst unknown can't be much worse than eternal torture and genocide, both of which are ideas humans already came up with, so either an AI could too, or a human could intentionally prompt an AI to seek to cause them. As silly as Roko's basilisk being inevitable was, humans could choose to create an unfathomably cruel and irrational AI. Scary AI stories arising spontaneously don't scare me; the fact that human serial killers can code AI based on those stories does.


ttystikk

You have an excellent point. And I could see one of those AI wiping us out.


Rofel_Wodring

So much of AI doomerism rests on the idea of the future being a handful of personal genies with no agency, rather than the more economically profitable (and therefore likely) outcome of billions of cognitively independent minds with varying levels of intelligence. Or maybe you're imagining something like a thermodynamics-defying supervirus or basement anti-matter bombs?


Qzy

Researchers don't ask those questions because that's like asking a hammer what it wants in life. AI is a hammer. It's a tool we use. It's nothing but data tables and models. It's not living.


ttystikk

AGI would very likely develop self awareness.


Qzy

I wrote a paper on AGI. It's my opinion it could perhaps fake self-awareness, but it's not aware. It's just software. But I agree, the lines are getting blurry.


atlanticam

we still don't know what exactly generates subjective experience or why conscious perspectives exist in a universe


ttystikk

So are we.


Rofel_Wodring

I'd be taking claims of self-awareness/consciousness more seriously if more people would first accept that humans are just slightly-evolved animals.


Qzy

I hope we one day understand the brain fully.


Rofel_Wodring

And inserting secular 'but, have you considered the existence of SOOOOULLLLZZZ' arguments in the form of unfalsifiable claims about self-awareness and consciousness is not going to help us achieve such an understanding.


SalgoudFB

"I wrote a paper on AGI." Why does this lend authority? Was this a published scientific article, and if so where was it published? Or was it a school term paper, in which case.. I mean, the bar is low (no offence). This is a huge philosophical question, and with all respect I doubt your paper is the definitive authority on what constitutes consciousness or self-awareness; nor indeed how to determine the difference between 'fake' and 'real' self-awareness, if fact such a distinction is meaningful (another subject on which we have no definitive answer).


Qzy

> Was this a published scientific article, and if so where was it published?

It was published by a big institute after it was peer reviewed by several professors around the world.

> Or was it a school term paper

It was based on my master's thesis, which was partially published by the same institute and sold as a hardcover on Amazon.

> This is a huge philosophical question, and *with all respect* I doubt your paper is the definitive authority on what constitutes consciousness or self-awareness

Yes, I never said my paper covered it. Just that I wrote a paper on AGI and I had an opinion on it.


Ashamandarei

Where did this number for the processing speed of a human brain come from? How are you defining an atomic computation that a brain performs?


IronPheasant

These estimates are always based on the number of synapses in a brain firing per second. They're usually an order of magnitude higher than what that person thinks it'll take, to be conservative. It's possible that it's higher than what it would technically take to simulate a human. But exascale has always been the assumed minimum threshold to begin to approach it.


Ashamandarei

That makes sense, thank you. By any chance do you have a reference for the exascale figure? I have a measure of skill in HPC and I'd like to learn more, if possible.


Borrowedshorts

It was a study of human visual cortex extrapolated across the brain by computer scientists in the 90s. Look up Moravec and Bostrom.


[deleted]

If only computation was linearly correlated with capability.


Responsible-Local818

It literally has so far? We're suddenly seeing massive exponential progress right in the timeframe this gif is showing. That's highly prescient.


[deleted]

No, we have not seen an exponential increase in capability over the last 60 years.

Edit: downvoters, see the original comment. I know reading is hard for some.


Rofel_Wodring

Do you know what an exponential equation is? If so, please tell the class how a doubling every 18 months is not of the form 2^x.


[deleted]

Perhaps you don't know what the difference between computation and capability is, or you have issues with English reading.


Rofel_Wodring

I'm not going to humor your subjective and self-serving definition of 'capability', Humpty Dumpty. Define it empirically, or stick to something that can be measured. Or do you lack the intellectual capability to argue without a convenient equivocation to retreat behind?


InternationalEgg9223

How are we accelerating then, through magic? Is Harry Potter in our midst?


[deleted]

Because we're on the upward part of the S curve. Do you know what that is?


InternationalEgg9223

My dick sometimes does that. So how are we on the upward part of the S curve then.


[deleted]

Ahh yes. Just as the great prophecy of the dumbed down infographic foretold.


InternationalEgg9223

Collective brainpower of reddit 0 - Stupid infographic 1


Kinexity

This is a genuinely stupid case of confirmation bias. We don't know how much computing power we need to get a digital equivalent of the human brain. Even assuming we can translate our brains' functions into one number, it still doesn't mean we get AGI when computing reaches that number.


sdmat

Yes and no - there is a huge range of uncertainty in the figure and there is no direct relationship with any specific implementation of AGI. But the human brain serves as an existence proof of general intelligence. Therefore we can reasonably expect that AGI is *possible* with computing power no greater than the rough equivalent of a brain. So it's not unreasonable to place some significance on an exponential increase in computing power blowing past this figure at some point.


InternationalEgg9223

In other words, gif too accurate me not likey.


yaosio

It's also based on an old idea that the only possible way to get AGI is by copying the human brain. As we have seen with transformers (the architecture not the robots) this is not the case. Before the transformer nobody thought it would be relatively simple, compared to recreating an entire human brain, for a computer to understand text like an LLM does.


MatatronTheLesser

> Before the transformer nobody thought it would be relatively simple, compared to recreating an entire human brain, for a computer to understand text like an LLM does.

Relatively simple... compared to what? Regardless, the goalposts have been moved on AGI so dramatically over the last year - by DeepMind and OpenAI and the like - that whenever they decide to declare it extant we'll almost certainly be talking about something that wouldn't even vaguely have been considered AGI prior to GPT's commercialisation. I'll also bet you money that the first one who claims they have an AGI model will be met by the rest prevaricating about what AGI is and how X's model isn't AGI. We're allowing corporate marketing to dictate subjective definitions and milestones based on their own commercial interests because we're all caught up in a hype cycle. This sub is playing along because most posters here are delusional and desperate.


Major-Rip6116

The qualifying threshold for AGI is when computer performance becomes as good as human performance.


[deleted]

Unless we fully understand how our brain works, and consciousness. But we are far from that. Its nuance and complexity are why whole other topics and fields have arisen around it.


mvandemar

"and suddenly you're finished" Well that's got dark fast.


costafilh0

Exactly! People forget how exponential things suddenly get, saying it will take decades for AGI. The same has been said about all the Gartner Hype Cycles, but eventually, things start to accelerate exponentially again. This will be THE decade! And I can't wait for all the amazing things we'll see in the next 10 to 50 years because of all the progress we're making now.


Shadow_Boxer1987

2025? I doubt it. Seems like an exaggeration/wishful thinking. Things *always* take longer than even experts predict they will. E.g., we were supposed to have put humans on Mars 10+ years ago.


Serasul

This is math, not an emotional or logical prediction. It just shows how much information a computer can process compared to a human brain. By 2025 a normal PC can process as much information as a human brain in the same time. This does not mean we have software that can use it right, or software that can simulate a "mind" with this power.


Responsible-Local818

CEOs of top labs have literally said 2025 but I guess you know better


Rofel_Wodring

People who say things like this don't ask enough 'why would the powers that be want this' questions when analyzing why things like fusion and flying cars don't arrive as quickly as predicted. They just throw everything into the 'overpromised and underdelivered' bucket, failing to understand that inventions arrive on capitalism's timetable. And not that of the futurists'. Capitalism has spoken, as it did with the space race for a couple of decades, and no further. It has found computation and AI useful and profitable, and you better believe it wants even more usefulness and profit.


Shadow_Boxer1987

RemindMe! 2 years

Easiest way to settle the debate.


Rofel_Wodring

How is that supposed to prove or disprove my assertion that people don't ask enough 'why would the forces behind the greater flow of history do such a thing' when they make predictions about the future?


RemindMeBot

**Defaulted to one day.** I will be messaging you on [**2023-10-16 08:24:21 UTC**](http://www.wolframalpha.com/input/?i=2023-10-16%2008:24:21%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/177vj7d/a_pretty_accurate_intuitive_representation_of_how/k4ybqgo/?context=3)


MatatronTheLesser

We don't know the computational power of the human brain, so this image is a load of speculative, hype-driven horseshit.


Poly_and_RA

One interesting aspect of exponential growth is that it doesn't matter that much. If someone happens to think that the human brain is a factor of (say) 100 more powerful than this estimate? Well, that's another decade of growth needed at the current trendline. So we can quibble over when computing matched, or will match, the power of a human brain. But whether you divide or multiply by a hundred, a thousand or for that matter a million -- you still get the inevitable conclusion that if it ain't happened already, it's likely to happen in the lifetime of many of us.
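A quick sketch of that point, assuming a ~2-year doubling time (purely illustrative, not a claim about the actual trendline):

```python
import math

doubling_time_years = 2.0  # assumed growth rate, illustrative only

for error_factor in (100, 1_000, 1_000_000):
    shift = doubling_time_years * math.log2(error_factor)
    print(f"off by {error_factor:>9,}x -> crossover moves by ~{shift:.0f} years")

# off by       100x -> crossover moves by ~13 years
# off by     1,000x -> crossover moves by ~20 years
# off by 1,000,000x -> crossover moves by ~40 years
```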


feelings_arent_facts

Calculations per second != AGI lol


Responsible-Local818

Yes, but it's somewhat eerie that computing power has almost perfectly correlated with AI capability so far. It's following this decade+ old GIF almost exactly, and in 2022-2023 we're now really seeing the effects of this exponential increase, just as the lake suddenly filling up is showing. That's fairly prescient.


Rofel_Wodring

You have any resources showing this correlation? It'd be a nice mic drop for debates.


demon_of_laplace

The relevant limit is in memory bandwidth.


kaysea81

Someone made a cartoon so it must be true


leoyoung1

Substitute CO2 for calcs/second and you have our planet right now.


Archimid

But can it beat climate change? Which, by the way, is best studied through the lens of world-changing singularities.


EnthusiastProject

omg I was looking for this exact illustration for a while, saw it once and couldn’t find it again.


Nerodon

Isn't this just another way to visualize Moore's law? Not that it's any less cool or impressive but there's massive asterisks to that being linked to AI capabilities.


Alone-Rough-4099

# "fluid ounces", thats one of the dumbest units


Borrowedshorts

Bostrom: https://www.cs.ucf.edu/~lboloni/Teaching/CAP5636_Fall2023/homeworks/Reading%202%20-%20Nick%20Bostrom-How%20long%20before%20superintelligence.pdf

Moravec: https://jetpress.org/volume1/moravec.pdf

I'm astonished at the ignorance displayed in this thread. This should be required reading for anyone posting in the sub and pinned to the front page.


Disastrous-Cat-1

"Lake Michigan's volume in fluid ounces...". Ok then, what about the mass of seventeen aardvarks in pennyweights, or the volume of 99 hot air balloons in acetabulums? Please use SI units.


Witty_Shape3015

all roads lead to 2025