
Google claims this is a real story

J W · · Unknown Hometown · Joined May 2021 · Points: 479
Collin H wrote:

I agree that it's intelligent, and I have at times been shocked by its performance. ChatGPT understands my PhD research better than 99.99% of people, and was able to correctly predict the results of an insanely complicated new analysis method based only on a description of how it worked and the nature of the data being analyzed. Both the method and data were unpublished, so reaching the right answer required some degree of genuine understanding and problem solving, and it was not a simple yes-or-no question. It's also great at coding, which definitely requires logic.

But would you seriously claim that it doesn't have profound deficits in other ways (metacognition) that make it unreliable and prone to falsehood? When it reaches its limit in any area, it behaves like a master BSer, far more so than a typical person. Few people will completely invent plausible-sounding details with no basis in reality as confidently, frequently, and eloquently as it does. Custom parameters and choosing the right model have helped to reduce the issue, but it's still pretty common in my experience. And sure, there are also humans that can't admit when they don't know something or recognize fake images/stories, but I wouldn't want them summarizing search results for me. The Google AI is dumb because it's a poor use of AI, not because AI can't be used in sensible ways (it also seems to be generally inferior to other options).

You're right that humans rarely have true insight into the nature of our thinking, as evidenced by the fact that we frequently remember things but have no idea where we got them from or if we thought of them ourselves. But I'm sure you also sometimes think "I'm confused" or "I'm not sure if I understand this". If you're talking to a person and you can't understand their argument, you might ask them to clarify or let them know what part you are struggling to understand. LLMs basically never do this. At their very best, they can recognize when there is inadequate information to provide an answer and don't invent random details to fabricate one, but they cannot recognize when they are struggling to understand the information that is there. This isn't based on a billion people compiling the very dumbest AI responses for a year, this is based on the responses it has given me during routine use and some unintentional AI fails I've seen in the wild (not ones that were shared as bad examples).

Just last week on another forum about nature/ecology, someone made a post about realistic ecological details in a popular video game, but they got a good portion of their info from ChatGPT. Alongside several other dubious facts, the post claimed the "cremasteric reflex" in horses was a "commonly cited detail" from the game. As a human with metacognition, I realized that I didn't understand this, so I looked it up and nearly died laughing when I saw the actual meaning (don't look it up at work). The moral of the story is to treat anything AI tells you with incredible skepticism. If you get complacent because of its intelligence, it will catch you off guard with something mind-bogglingly stupid that sounds smart, and you may find out the interesting fact you just shared is actually a semi-obscene falsehood about horse genitals (no shade to the OP from the other forum, just too good not to share).

If mp lets me post this, I’ll just say for now that I appreciate the thinking in your comment. I suspect our views, at least those expressed thus far, overlap almost entirely.

If there’s daylight between our positions, it may be due to your having a higher opinion of humans than I do. I do think we’re worth saving, despite our fervent efforts to make doing so impossible.

And yeah, it’s easy to set an AI to a task it’s ill-equipped to perform and laugh when it fails. I’ll trust you to infer why nothing derogatory about AI or complimentary about humans can be derived from this.

It may be that the vast majority of people have only used the free, significantly “dumber” models.

Well, to invoke the now-ubiquitous tiger cub analogy, I sincerely hope no one is foolish enough to think that today’s relatively harmless play will endure forever.

Tigers grow up.

Daniel: I’m glad you brought up gravity—the one thing that we’re all so certain of, but about which the entire species is almost certainly wrong. See: science 1905-2026.

It’s kind of a fun irony that AI is made possible by employing a framework altogether at odds with what humans purport to understand about gravity. I know you’re up to speed, but for those who aren’t, the salient takeaway is that Christians and Muslims can’t both be right. One or both groups are wrong. Which is to say, string theorists may be attempting the Lord’s work of getting relativity to play nicely with quantum mechanics, but a century after the birth of both rival theories, we’re still at loggerheads with no discernible hope on the event horizon.

Incidentally, Daniel, you’re one of the people I had in mind when I gave my set of parameters to ChatGPT, specifically, your candidness, your implacability even when standing alone, and your seemingly inexhaustible capacity for nuance (much to the dismay of many on this forum!). Above all, because we disagree about many things.

Daniel Shively · · Unknown Hometown · Joined Sep 2024 · Points: 0
J W wrote:

Daniel: I’m glad you brought up gravity—the one thing that we’re all so certain of, but about which the entire species is almost certainly wrong. See: science 1905-2026.

[...]

Incidentally, Daniel, you’re one of the people I had in mind when I gave my set of parameters to ChatGPT, specifically, your candidness, your implacability even when standing alone, and your seemingly inexhaustible capacity for nuance (much to the dismay of many on this forum!). Above all, because we disagree about many things.

I didn’t claim that general relativity or quantum theory explains gravity, only that an effect is observable. I’m open to being wrong or uncertain about everything but as far as my perception indicates, something seems to keep “me” attached to the “earth”. Maybe the effect occurs because I observe it. Maybe the effect occurs because I “believe” it will occur. Whatever the explanation, we all seem to experience it. Maybe Buddhist Siddhis can overcome it.  

It’s interesting that you used the word “almost” before “certainly”. 

Reading that you thought of our interactions when setting your LLM parameters is something I would never have anticipated; what an interesting timeline! I must say it is somewhat flattering, in a dystopian sort of way. Nuance is the enemy of dogma.

Cheers!

Chris M · · Detroit, MI · Joined May 2025 · Points: 5

Best thread in a while. 

I don't have anything to add at the moment except that it is my hope that this whole AI era, and especially its flaws, will stimulate the bit of metacognition that has happened here. Obviously everyone who's contributed here is pretty intelligent, and ultimately we are sort of limited and differentiated by our raw brainpower, but either way, maybe it will make us better as a species.

Although I have seen plenty of evidence to the contrary, and I am a flawed creature whose view of humanity fluctuates based on his sleep, protein intake, fitness level, etc. Many people are already outsourcing their thinking to AI completely uncritically, and I think we are way too early to be outsourcing the thinking of even the dumbest people to AI. Here's a climbing-specific example I saw recently that you might enjoy. For context, the area this climb resides in is incredibly gatekept (the guide who takes people out there makes people sign an NDA to not reveal its location?!) so it's hilarious that the same guide reposted this AI slop from a client who was stoked to get out onto it. 

Sorry, I know I'm kind of reiterating the basic points of this thread, and I know slop to repost and laugh at is pretty low-hanging fruit nowadays, but we did start this thread talking about climbing AI slop, and this is prime slop.

amarius · · Nowhere, OK · Joined Feb 2012 · Points: 20
Chris M wrote:

 Many people are already outsourcing their thinking to AI completely uncritically, and I think we are way too early to be outsourcing the thinking of even the dumbest people to AI. 

Well, you are not wrong. It is really difficult to understand why the OP started this thread; distinguishing between trolling attempts and genuine discussion is quite challenging. I interpreted the post as the OP having a difficult time understanding that the "news story" was AI slop; furthermore, he had no ability to recognize that the "research summary" was AI slop as well. The thread then became the typical circle-jerking event of raging against all the trigger issues that are popular on MP.

Anyways, let's continue the noble tradition - DoYoUrOwNrEsEaRcH!

Chris M · · Detroit, MI · Joined May 2025 · Points: 5
amarius wrote:

The thread then became the typical circle-jerking event of raging against all the trigger issues that are popular on MP.

Not in my estimation. 

People who tell each other to "do their own research" are generally like ravers telling each other to not drive drunk. Everyone pretends they're doing the right thing. Naked emperor situation. 

Cherokee Nunes · · Unknown Hometown · Joined May 2015 · Points: 0

I thought you guys spun gold cloth from wheat chaff, personally. I enjoyed the commentary about AI.

If two people connect in what is for all practical purposes a troll thread, and have a good discussion as a result, the troll has been defeated.

J W · · Unknown Hometown · Joined May 2021 · Points: 479
Cherokee Nunes wrote:

I thought you guys spun gold cloth from wheat chaff, personally. I enjoyed the commentary about AI.

If two people connect in what is for all practical purposes a troll thread, and have a good discussion as a result, the troll has been defeated.

Which is why I’m always relieved when you weigh in.

gtluke · · Unknown Hometown · Joined May 2012 · Points: 1

it's hip to hate elon, but Grok is far superior, especially to Google's AI.
Google's AI thinks it's real even when probed; I tried it.
I copied the image into Grok and asked and got:

is this a true story?

[Searching the web: 50 results. Searching on X. Browsing web page: 1 result.]

No, this doesn't appear to be a true story. The viral post (and similar versions circulating on social media like Facebook) about a climber in Colorado reaching into a rock wall/handhold and suddenly coming face-to-face with a hidden mountain lion (cougar) in its den seems to be fabricated or heavily exaggerated for clicks. Here's why:
  • Extensive searches for news reports, wildlife incidents, or verified accounts of this exact scenario (climber putting hand in rock hole → discovering mountain lion den → close face-to-face encounter) turn up nothing credible from reliable sources like Colorado Parks and Wildlife, local news outlets (Denver Post, Coloradoan, etc.), or official records.
  • Real mountain lion encounters in Colorado are well-documented — usually trail runners/hikers being stalked or ambushed on open trails, not climbers discovering dens by sticking hands into rock crevices. Famous cases (like the 2019 trail runner who fought off and killed a juvenile lion) involve very different circumstances, and none match this description.
  • The specific phrasing and dramatic video clip format (with captions like "A CLIMBER IN COLORADO REACHED INTO A ROCK WALL...") are common in viral animal content pages that often post AI-generated, staged, or recycled fakes. Accounts like "Roam & Roar" and similar wildlife meme pages frequently share sensationalized or outright fabricated stories to drive engagement — no mainstream or official wildlife sources reference this event.
Mountain lions do live in rocky areas and use caves/crevices as dens (especially mothers with kittens), and close wildlife encounters happen in Colorado, but this particular "hand-in-the-hole surprise" tale has all the hallmarks of internet fiction rather than a real incident.

If you have the original video source link (not just screenshots), feel free to share — sometimes context helps spot edits or AI tricks! Otherwise, treat this one as classic viral sensationalism.
Stay safe out there in cougar country, though — real encounters are rare but do happen!
slim · · Unknown Hometown · Joined Dec 2004 · Points: 1,093

one of the many things i can't get over is how AI can supposedly magically solve all of these really complicated questions/problems, and when i ask it a simple question, like "what size wrench do i need for such and such?", and i give it specific part names, dates, numbers, etc - it confidently states the wrong answer. literally every.... single... time.

all i can say is pray for john connor.

Chris M · · Detroit, MI · Joined May 2025 · Points: 5
gtluke wrote:

it's hip to hate elon, but Grok is far superior, especially to Google's AI.
Google's AI thinks it's real even when probed; I tried it.
I copied the image into Grok and asked and got:


I use Grok at least a couple times each week and it sucks for many things too. Even in expert mode. Which is strange since Elon is always touting metrics showing how much it blows everything else out of the water. Maybe the rest just sucks even more. Google AI summary is definitely horrible.

Interestingly, Grok is incredibly good at searching for specific topics in early Christian writings (not a Christian in anything beyond cultural allegiance; just a special interest of mine). Really, it's good at doing history in general. Maybe the source material is just broadly that much more rigorous.

J W · · Unknown Hometown · Joined May 2021 · Points: 479
slim wrote:

one of the many things i can't get over is how AI can supposedly magically solve all of these really complicated questions/problems, and when i ask it a simple question, like "what size wrench do i need for such and such?", and i give it specific part names, dates, numbers, etc - it confidently states the wrong answer. literally every.... single... time.

all i can say is pray for john connor.

Have you talked to a human at REI or Backcountry and asked a technical question about climbing equipment?

Re: AI reliability 

The quickest way to get better responses is to pay for at least one step up from the free model.

Barring that, set some parameters for your chat (the number it can handle varies by AI and tier), one being that it has to provide sources that support any answer it gives you.
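For anyone who wants to try the parameter-setting advice programmatically rather than in a chat window, the sketch below shows the general pattern: pin the model's behavior with a standing system instruction that demands sources. This is only a minimal illustration; the payload shape follows the widely used OpenAI-style chat convention, the model name is a placeholder, and no request is actually sent.

```python
import json

# Standing instruction ("parameter") that asks the model to cite sources
# and to admit uncertainty instead of inventing an answer.
SYSTEM_PROMPT = (
    "You are a careful research assistant. For every factual claim, "
    "cite a verifiable source (publication, URL, or document). "
    "If you cannot find a supporting source, say you don't know "
    "instead of guessing."
)

def build_request(user_question: str, model: str = "example-model") -> dict:
    """Assemble a chat-completion payload with the standing instruction.

    The messages array mirrors the common OpenAI-style chat format;
    the model name here is a stand-in, not a real identifier.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }

if __name__ == "__main__":
    payload = build_request("What size wrench fits a 3/8-inch bolt head?")
    print(json.dumps(payload, indent=2))
```

The point of the design is that the system message rides along with every question in the conversation, so the source-citing requirement doesn't have to be repeated each time.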

Daniel Shively · · Unknown Hometown · Joined Sep 2024 · Points: 0

I don’t think that I’ve ever mentioned this, and I will probably be labeled as naive or goofy or any of the myriad pejoratives MP so fondly bestows upon me, but here goes. Fairly early on, when LLMs seemed more fantasy than reality, I decided to flood my cloud and other internet spaces with as much stoke and beauty as I could create (mostly photos). My intention was to provide insight about man’s interaction with nature, ourselves, and one another for the LLMs to digest, in hopes of balancing the overall negativity found on many internet platforms. I briefly had an X account, but the severe negative vibes quickly had me abandoning my efforts there. I hope that everyone realizes that a main function of social media is data mining to accumulate info for machine-learning platforms. Maybe some will view my idea as a simple justification for ego-based sharing, but I firmly know my true intentions.

Anyone else consider this, or is it just the machinations of an old mountain dweller? Thanks all for an interesting thread.

Collin H · · Unknown Hometown · Joined Jun 2020 · Points: 131

Slim: I think AI is an "intelligent idiot" with really high variability in its performance and a high propensity to BS. If you ask it only simple questions, it has plenty of opportunity to show how stupid it can be, but few to demonstrate its strengths. With more complex questions/tasks, it can go back and forth between demonstrating high ability (particularly at coding) and making the mistakes of an over-confident six year old with a good imagination and the vocabulary of an adult.

JW: I think we are generally on the same page as well (or at least substantially overlapping). I find popular views on the capabilities of LLMs often tend to fall into oversimplified extremes. Either "it's a text predictor and doesn't even understand what it's saying" or "it operates at the PhD level in all subjects, is already smarter than most people, and could replace most jobs today or in the next few years." It's not a simple subject, so it's hard to concisely express views that capture both the strengths and limitations of it. 

I also agree with your tiger cub comment, and have been highly concerned about where further advances in AI will leave us (potentially extreme economic, political, and social destabilization). I love people but have a generally cynical view of humanity, the direction we are headed, and the way AI will play into it.

Chris: To respond to your earlier question, I can really see things going either way. On good days, I hope that ubiquitous AI will leave people disillusioned and craving more authentic human expression and critical thinking. With a deluge of generic AI content that paraphrases and echoes human expression but rarely captures its essence, perhaps the real thing will seem to be relatively scarce and will be more highly valued and sought after. 

On cynical days, I think this will be lost in the noise, and that people will grow to accept AI content as a good enough substitute for the real thing so long as it's amusing, engaging, pretty, or molded to their sensibilities. Or that most people will become so heavily dependent on it that they lose the capacity for authenticity (i.e. they must always consult AI when attempting to develop their opinions/arguments, writing, art, and everything else). 

Or perhaps both will happen, with some people falling into the first group and others into the latter; maybe that is most realistic. In a desert of genuine human experience, some of us will end up in the oases where real people connect, both in person and online, while others will end up staring at mirages.

Daniel: Your AI replicant will live on in this brave new world, whether you like it or not! (In reference to JW's AI parameters)

Everyone else: Sorry for the blathering wall of text!

Alan Rubin · · Unknown Hometown · Joined Apr 2015 · Points: 10

I think that much of the 'dispute' above in this thread has been largely semantic; most specifically, there has been no agreed definition of 'intelligence' in this context, or at least no participant has specifically provided one.

However, I want to ask a question of those on here who are much more knowledgeable in this area than I am. Can an AI tool (Google, Grok, whatever) actually have a sense of humor? I'm sure they can create 'jokes', possibly even routines, but can they really 'get' them? Somehow, to me, having a sense of humor is a sign of true, at least emotional, intelligence; and, yes, I strongly believe that many animals, in addition to humans, seem to be gifted with a sense of humor.

slim · · Unknown Hometown · Joined Dec 2004 · Points: 1,093
Collin H wrote:

Slim: I think AI is an "intelligent idiot" with really high variability in its performance and a high propensity to BS. If you ask it only simple questions, it has plenty of opportunity to show how stupid it can be, but few to demonstrate its strengths. With more complex questions/tasks, it can go back and forth between demonstrating high ability (particularly at coding) and making the mistakes of an over-confident six year old with a good imagination and the vocabulary of an adult.

i don't consider this intelligence. i consider this guessing. or, basically just luck.

Alan Rubin · · Unknown Hometown · Joined Apr 2015 · Points: 10

Thank you, JW, and whichever LLM you are using. While it will take me a LONG time to fully absorb all this, my initial reaction, not unexpectedly, is that your AI tool, at least, understands what a sense of humor is but, as it also understands, doesn't really 'have' one. I could ask the same question, of course, about love, hate, hurt, etc., but have no doubt that the responses would be fundamentally the same (don't bother doing so!!!). To me, feelings, emotions, etc. are an essential part of what I feel makes up 'intelligence' (and these traits are not limited to humans but are shared to some extent with at least some other living beings). That they are not really replicable as a basic aspect of otherwise very high-functioning machines is a fundamental factor that I find extremely worrisome about AI.

Cocoapuffs 1000 · · Columbus, OH · Joined Jun 2008 · Points: 50
J W wrote:

Finally, what you find worrisome is worrisome to me, as well. What worries me further is that the majority of the planet seems incapable or unwilling to consider the implications. That we are in an arms race to develop ever more capable models without first solving that and other problems holds the potential to alter societies in ways never before seen, or indeed, imagined.

I am also very pessimistic, based on two factors:

1) If it is possible to 'win' the race to a superintelligent AI, then there will be only one winner - whoever gets there first.  There is no second place.

2) Any sincere attempt to develop AI safely and responsibly will be slower than a 'reckless' approach.

IMO, all the big players competing in the AI race are rolling the dice with the future of the human race, and they know it.  In their minds, they have no other choice.

Chris M · · Detroit, MI · Joined May 2025 · Points: 5
Cocoapuffs 1000 wrote:

1) If it is possible to 'win' the race to a superintelligent AI, then there will be only one winner - whoever gets there first.  There is no second place.

2) Any sincere attempt to develop AI safely and responsibly will be slower than a 'reckless' approach.

I don't know if I would confidently make either of these assertions. I don't think the ramifications of the development of "superintelligent AI" are all that different from those of any previous world-changing development, such as the internet or nuclear weapons/power. Everyone ended up winning out with the internet, and while the nuclear arms race led to mass proliferation, which isn't great, we mostly ended up being otherwise OK. Curious what you think makes superintelligent AI so different? I also don't think "getting there first" is a clear-cut, singular event like crossing the line at a track race.

Cherokee Nunes · · Unknown Hometown · Joined May 2015 · Points: 0

Chris M wrote:

Everyone ended up winning out with the internet,

Yeah, but did we, though? It's debatable, but probably on time scales longer than we can manage here in the present. But I mean this: look at what technology is doing, actually doing, to the planet. How can anyone pretend technology is good for the planet Earth? I'd say it definitely is not good for the planet. Full stop. But I'm not going to kill myself, nor stop typing, hahaha.

I understand your point however. What if technology, itself, were a virus? And what if that virus seeks only to replicate and evolve itself? Getting organisms to "do its bidding," where the bidding is "create more technology, also capable of replication." For this virus to be successful it must alter the behavior of the host so that the host replicates more of the virus, right?

There's your technological arms race, or a version of it.

We like to think we must "go to the stars," but what if it's the virus that wants to go to the stars (from whence it came)?

Technology begets itself, but only through its zombie hosts (us!). Complexity arises. The host cannot "go back," and its continued and in fact vastly extended lifespan is dedicated to the use and propagation of more technology and is in fact utterly dependent on it. Separate the virus from the host and the host dies, quickly. 

Gunkiemike · · Unknown Hometown · Joined Jul 2009 · Points: 3,722
Cherokee Nunes wrote:

I understand your point however. What if technology, itself, were a virus? And what if that virus seeks only to replicate and evolve itself? [...]

Technology begets itself, but only through its zombie hosts (us!). [...] Separate the virus from the host and the host dies, quickly.

Sounds like someone's been binge-watching "Pluribus".
