
Quad anchor is not redundant at the clip-in point?

Mark Pilate · · MN · Joined Jun 2013 · Points: 25
DrRockso RRG wrote:

I think even the data from what you describe would be skewed Mark, both because people would be hyper aware of getting the equalization just right vs real life (hopefully you're not telling them what the purpose of the study is, though if they see you putting load cells in it might be pretty obvious for some) and the fact that the direction of load is often changing….

I thought about this, but in my mind the pertinent data is actually how good can you make it, statistically, when you are trying the best you can.  Essentially establish the upper limit.  It’s difficult to control for “carelessness” as that may or may not apply to any specific person.  

If you wanted, you could probably get this number yourself by just being quick and careless. This would be easier to simulate, I would think, by just tying them as quickly as possible.

I'd bet the data would invert for experienced veterans vs. noobs depending on whether they're trying to do it well or doing it as they would in real life. With the "microscope" on, I bet the vets do better on average; in real life, I bet they're more cavalier and sloppy than a noob, which makes them "worse" on average.

Then the question is how much does it matter?

This seems like a winter gym project to collect data when I’ll have more free time.  After all, might as well have those load cells out collecting data vs in my basement collecting dust. 
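For a rough sense of what "perfect" equalization would even mean, there is a simple geometric baseline: in an idealized, perfectly equalized two-leg anchor, the load on each leg depends only on the included angle between the legs. A minimal sketch (assuming a symmetric V, a static load, and no friction or stretch; real anchors will deviate):

```python
import math

def leg_load(total_load, included_angle_deg):
    """Per-leg load for an idealized, perfectly equalized two-leg anchor.

    Assumes a symmetric V and a purely static load; friction, cord
    stretch, and imperfect equalization are ignored.
    """
    half_angle = math.radians(included_angle_deg / 2)
    return total_load / (2 * math.cos(half_angle))

# Each leg sees half the load at 0 degrees and the full load at 120:
for angle in (0, 30, 60, 90, 120):
    print(f"{angle:3d} deg -> {leg_load(1.0, angle):.2f} x total load")
```

Measured load-cell data like the gym project described above would show how far real-world rigging lands from this ideal.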

Gregory Cooper · · Phoenix, AZ · Joined Oct 2014 · Points: 2,967

Derek, thought I recognized that name! I run into Carl in the wild and now you on MP. Doesn't surprise me that I would find you posting nerdy stuff about anchors. I think there needs to be a huge push for the AMGA, or better the IFMGA to establish a peer reviewed journal where information can be shared at a central point. Access can be given to members or certified guides for free, and maybe even universities. 

If it isn't obvious by this thread, there is a strong need for scientific studies and/or strong theoretical papers discussing best practices. So much of what we learn in these three disciplines is handed down to us via telephone. We trust those who teach us because they are the SMEs, without acknowledging that they could be wrong or might not have a strong theoretical understanding of what they are teaching. What's worse is humans are lazy, and maybe we don't keep ourselves up to date on best practices. As someone mentioned in this thread about the John Long book, some information might be really old. I even remember during the SPI course last year you and Carl mentioned that the SPI handbook is already out of date in regard to some of the best practices. Maybe a really solid handbook needs to be written by the AMGA or IFMGA that serves as a base for this knowledge. Then new information can be published in the journals and the handbooks updated (or at least reviewed every 5 years if need be). This isn't a slight against Gaines or Martin, but if the AMGA wants to call itself a leader in education, standards, and certification, we really need to do better in this regard.

Another benefit of something like this is that it's central. You posted a link to an article that you wrote for Weber University. I would never have thought to look for such an article there, and I'm sure a Google search would not have even pulled something like that up. However, the AMGA/IFMGA could tell all its members: "Hey, if you want to look at the latest and greatest, just search the Journal of Climbing Expeditionary Practices (working title) at www.amga.com/publications/journalof yada yada." Not spread out over MP, or multiple other journals, or articles on websites, or someone's blog. I think Tech Tuesday on Instagram is the closest thing I've seen.

Additionally, books that are then published by the AMGA/IFMGA can finally start referencing these studies! So when a student is curious as to why something is or isn't recommended, they can actually look up the actual science or theoretical framework. Or when a student like me asks you why and when it is OK for a client to clip to one strand of a fixed point anchor even though it might create a FF5 or FF7, you can say: because this study found blah blah blah.
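For readers not fluent in the "FF5 or FF7" shorthand above: fall factor is fall distance divided by the length of rope or tether absorbing the fall, which is why a short fall directly onto a short fixed tether can produce factors far beyond the maximum of 2 possible on a lead rope. A toy calculation with made-up numbers, purely for illustration:

```python
def fall_factor(fall_distance_m, absorbing_length_m):
    """Fall factor = fall distance / length of material absorbing it."""
    return fall_distance_m / absorbing_length_m

# A typical lead fall: 4 m fall with 10 m of rope out.
print(fall_factor(4, 10))     # 0.4

# Hypothetical: clipped to a 0.2 m strand at a fixed point anchor
# and falling 1 m onto it, far beyond any lead fall.
print(fall_factor(1.0, 0.2))  # 5.0
```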

Granted, having an actual peer reviewed journal might exclude some people; not everyone has the benefit of being able to understand, or at least interpret, the scientific or mathematical jargon. This can, however, be alleviated with a well written abstract and conclusion in plain English. Not every article needs to be a quantitative study, but it definitely needs to be heavily referenced. libguides.ucmerced.edu/c.ph…;p=4495850

Notice how when you came in with a very clear, concise, well thought out explanation with citations, everyone basically stopped bickering. The arguments of "Because I don't like it!" or "we've always done it this way!" or "I was taught this way!" fell away. It wasn't until then that people started to challenge their own assumptions. For me, this is the biggest reason for having something like this. Of course the trolls will troll, and the people who only know how to black hat will still just black hat. We can just ignore them; the rest of us reasonable people will challenge our assumptions and learn something.

Anyway, just some thoughts.

Ricky Harline · · Angel's Camp, CA · Joined Nov 2016 · Points: 147
Gregory Cooper wrote:

Derek, thought I recognized that name! I run into Carl in the wild and now you on MP. … I think there needs to be a huge push for the AMGA, or better the IFMGA, to establish a peer reviewed journal where information can be shared at a central point. …

I'd be really interested to hear about what best practices in the handbook are outdated. I've been using it as reference for some writing I'm doing; would love to know what is less than ideal there!

Jim Titt · · Germany · Joined Nov 2009 · Points: 490
Gregory Cooper wrote:

Derek, thought I recognized that name! I run into Carl in the wild and now you on MP. … I think there needs to be a huge push for the AMGA, or better the IFMGA, to establish a peer reviewed journal where information can be shared at a central point. …

And who exactly is going to peer-review the papers? I've spent the last 20 years looking at well-cited bullshit and actually testing their premises, and yeah, mostly they are either just plain wrong or draw conclusions that aren't justified by the evidence they present.

Gregory Cooper · · Phoenix, AZ · Joined Oct 2014 · Points: 2,967
Jim Titt wrote:

And who exactly is going to peer-review the papers? …

The same way all peer reviewed journals are...? It's essentially a double-blind process. Someone submits a paper; that paper is then sent anonymously to multiple experts in the field (usually PhDs) who individually review the paper and give feedback anonymously.

Well, I usually don't see well-cited anything for the most part, and most tests are done in the "lab" and not in real world conditions.

As the sport continues to grow we need to unify our research, theory, and practice across the globe. 

PWZ · · Unknown Hometown · Joined Feb 2016 · Points: 0
M Appelquist wrote:

Let’s not shut down innovation and discussion through excess “unification”

Yes, innovation is certainly at risk what with the myriad of nitwits trying to reinvent the wheel with a macrame dreamcatcher.

Bruno Schull · · Unknown Hometown · Joined Dec 2009 · Points: 0

Hi Gregory--I think most people would agree with the general idea--the need for high quality peer reviewed studies, interpreted correctly, and communicated in a meaningful way. 

The problem is that, despite the growth of the sport, no real structure or system exists to motivate this kind of approach. I think this is what Jim Titt is suggesting (and he would know). Who would pay for the studies? Where would the funding come from? Who would actually do the peer review? What would the platform be to communicate the results? Does the responsibility fall on individual companies? On equipment certification organizations? On guiding certification organizations? On national climbing associations? And so on.

The truth is, it's the same in many sports. For example, think about cycling and something like helmet safety. There is no real centralized system, just a disorganized, constantly evolving set of standards and studies, conducted by enthusiastic doctoral students, professionals outside the sport, medical doctors, curious lay people, magazines or web publishing platforms, individual companies with commercial interests, different certification organizations, and so on.

If there were a way to implement your idea, it would probably take something like 1) a concerted effort by bodies like the IFMGA and the UIAA, 2) cooperation and collaboration between competing commercial concerns (Petzl and Black Diamond), 3) some sort of third party, like a magazine (Climbing) or an alpine club (DAV), funding their own research and gradually establishing themselves as the standard, or 4) some motivated individuals, perhaps funded by the masses (How Not To Highline), or some combination of all of the above, which is kind of what we have now. I think you can appreciate that each of these possibilities is fraught with difficulties and highly unlikely.

Maybe you should start a journal? You could call it the "International Journal of Best Practices in Climbing and Mountaineering."

Best.

Derek DeBruin · · Unknown Hometown · Joined Jul 2010 · Points: 1,129

@Greg: thanks for chiming in. As has been noted, it's certainly a worthy idea and one I'd support. Implementation becomes the challenge. In the U.S., the logical place to house a repository of studies or even a journal is the American Alpine Club, particularly since much of the publicly available data coming from Europe is similarly housed among the alpine clubs (CAI, DAV, etc.). The AAC already manages publications, is far more likely to have the available budget, doesn't suffer from direct business competition concerns, and also has a voice internationally with the UIAA (as do all the alpine clubs). However, actually getting a thing launched is a different challenge entirely. There is the International Rock Climbing Research Association and their publication, though last time I read it the focus was largely on exercise/sport science as opposed to technical systems. 

@Ricky: there's a decent list of errata for the SPI Handbook. I don't have it on hand at the moment. Your best bet would be to contact the Climbing Instructor Program Manager at the AMGA, Andrew Megas-Russell, at andrew@amga.com

rgold · · Poughkeepsie, NY · Joined Feb 2008 · Points: 526

Before we get dedicated journals and organizations collecting and promulgating the results, it sure would be nice if the AAC would embark on a program to translate the by now extensive work of ENSA, the DAV, and the CAI. That organization and those clubs have facilities and resources that will probably never be matched in the US, and we have only the most haphazard access to the substantial amount of knowledge they've already accumulated.

I've suggested this to various AAC people over the years as well as writing it in surveys about what the club should do. Nothing has ever come of it...

Connor Dobson · · Louisville, CO · Joined Dec 2017 · Points: 269
Gregory Cooper wrote:

The same way all peer reviewed journals are...? It's essentially a double-blind process. Someone submits a paper; that paper is then sent anonymously to multiple experts in the field (usually PhDs) who individually review the paper and give feedback anonymously. …

I have my PhD in anchorology from MPU, feel free to send stuff my way.

An excerpt from my thesis: 

 I'll have you know I graduated top of my class in anchor evaluation, and I've been involved in numerous secret crags, and I have over 300 confirmed quad threads. I am trained in cordellete warfare and I'm the top bolt clipper in the entire US climbing forces. You are nothing to me but just another gym climber. I will anchor you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of AMGA guides across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little quad you call your anchor. You're fucking dead, gumby. I can be anywhere, anytime, and I can secure you in over seven hundred ways, and that's just with my bare hands. Not only am I extensively trained in unarmed climbing, but I have access to the entire arsenal of totems and I will use it to its full extent to keep your miserable ass on the face of the continent, you little shit. If only you could have known what unholy education your little "clever" rope trick was about to bring down upon you, maybe you would have held your fucking PAS. But you couldn't, you didn't, and now you're paying the price, you goddamn idiot. I will shit fury all over epinephrine and you will drown in it. You're fucking dead, kiddo.

Jim Titt · · Germany · Joined Nov 2009 · Points: 490
Gregory Cooper wrote:

The same way all peer reviewed journals are...? It's essentially a double-blind process. Someone submits a paper; that paper is then sent anonymously to multiple experts in the field (usually PhDs) who individually review the paper and give feedback anonymously. …

I know HOW peer review works; the question is WHO? You need people who know enough about the subject and are in a position to repeat or perform further experiments to confirm the original premise. I've had two papers independently confirmed, and two years is good.

And we do stuff in the lab because "real life" is slow, expensive, often impossible, and usually horribly dangerous. You often need to test the worst case, and that means identifying where the experimentees would die.

Gunkiemike · · Unknown Hometown · Joined Jul 2009 · Points: 3,732
Jim Titt wrote:

I know HOW peer review works; the question is WHO? … And we do stuff in the lab because "real life" is slow, expensive, often impossible, and usually horribly dangerous. …

+1 for Jim's point about real world vs lab testing. If one is going to put a load cell and high-speed data acquisition on each leg of a multipoint anchor and do several, ahem, identical drops on it, that's vastly harder to do on a cliff face. And many of the physical elements involved in anchoring really don't care where they are. You want temperature and humidity control? A robust study dealing with the behavior of polymeric materials under load really should, right? Try doing that outside.

And while citations are important, they are secondary to the quality of the new information provided (review articles and meta-studies excepted). One of the oldest BS tricks in academia is to reference the bloody hell out of innocuous text. It might look like this:

"For years (1) climbers have been debating the best way to construct reliable, strong anchors (2,3). Using pitons (4,5), bolts (4,6), or removable gear (4,7,8) presents the climber with myriad options. The well-known cordelette (6,8,9,10,11,12) and, more recently, the quad (9,10,11,13) and equalette (10,11,12,14) have been considered and weighed against the often competing demands (13,15) of a toprope or multipitch belay anchor (16)."

Lyle M · · New Haven, Ct · Joined Aug 2018 · Points: 586
Jim Titt wrote:

I know HOW peer review works, the question is WHO? You need people who know enough about the subject and are in a position to repeat or perform further experiments to confirm the original premise. I've had two papers independently confirmed and two years is good.


Hey Jim, I think you may be a bit wrong in your assumptions about how research is reviewed and published.

Generally 3-5 experts in the field most closely related to the experiment are chosen by the research community; these reviewers also have to accept the role and usually do it pro bono, to bolster their status. They determine things like whether the experiment is designed correctly and is repeatable; they likely never perform any actual experiments to repeat the data, but will criticize the way it was done. Research does not have to be repeated to be published; it's generally accepted that we trust the peer review process to argue that the data is presentable. If a reviewer does a shoddy job, they will be exiled as a trusted expert through natural selection.

Jim Titt · · Germany · Joined Nov 2009 · Points: 490

I've a reasonable idea how peer review works in academia; my wife is on a medical review panel. In the climbing literature world, the first thing you do is check the experimental results by repeating them; there's no previous literature to rely on.

Lyle M · · New Haven, Ct · Joined Aug 2018 · Points: 586

I’m not convinced you need previous journals if experts are selected to review the data.


edit: I'm also not convinced every published result needs to be repeated to be trusted. The argument I'm reading from you is that we can't have a journal because no journals have existed before, and any data collected and reviewed by our peers should be deemed inadequate. It just sounds like a lot of excuses not to start with an accepted but flawed product; even established journals and science are inherently flawed.

rgold · · Poughkeepsie, NY · Joined Feb 2008 · Points: 526

There are different sources for confidence (or lack of it) in testing, and expert review can address some but not all of the issues. A situation that is highly problematic occurs when the phenomenon in question has a lot of variation. Small samples can easily miss this variation, and so provide conclusions that would not be justified by more extensive trials. The vast majority of climbing tests I've seen suffer from this potentially fatal flaw. Sometimes tests give nearly contradictory results, as has happened with the ENSA testing of the in-line figure 8, finding it as good as or better than the in-line overhand (EDK) in roll resistance, in spite of other testing giving the opposite result. Is this a result of inherent variability in the knot itself, or is there something about the testing protocols that produces different results? (I've always wondered whether the loading velocity of the slow-pull tests affects the results, for example.) Of course, funding larger numbers of trials is problematic too, as breaking expensive gear is a feature of such tests.
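The small-sample point is easy to demonstrate numerically. A quick sketch with invented numbers (a hypothetical "breaking strength" population, not real test data): draw many 5-test series and count how often a series fails to observe both tails of the population.

```python
import random

random.seed(42)

# Hypothetical population of breaking strengths: mean 20 kN, sd 3 kN.
population = [random.gauss(20, 3) for _ in range(100_000)]
population.sort()
p5, p95 = population[5_000], population[95_000]  # 5th/95th percentiles

# How often does a 5-sample test series fail to see BOTH tails,
# i.e. miss either the weak outliers or the strong ones?
trials = 2_000
missed = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if not (min(sample) < p5 and max(sample) > p95):
        missed += 1

print(f"5-sample series missing at least one tail: {missed / trials:.0%}")
```

With five pulls per configuration, the large majority of series never even see one of the tails, so conclusions about worst-case behavior rest on data that usually is not there.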

In other cases, I think expert review might identify either flawed testing procedures or, perhaps more frequently, a mismatch between what was actually tested and the claims made for the results. Various tests prematurely proclaiming that "extension is a myth" come to mind...

Jim Titt · · Germany · Joined Nov 2009 · Points: 490
Lyle M wrote:

I'm not convinced you need previous journals if experts are selected to review the data. … I'm also not convinced every published result needs to be repeated to be trusted. …

There's already the Journal of Sports Engineering and Technology, but they probably set a higher standard than most climbing literature!

Unfortunately it is usually necessary to test some of the proposals; many researchers are prone to less than rigorous exploration of the range of possibilities, making assumptions based on limited results, or even inventing effects to allow disagreeing results to be discarded.

Gregory Cooper · · Phoenix, AZ · Joined Oct 2014 · Points: 2,967
M Appelquist wrote:

Let’s not shut down innovation and discussion through excess “unification”

I think you completely missed the point of what I'm saying. It doesn't shut down innovation; it does the exact opposite. Studies are used to discover new things and report findings across borders. That's the entire purpose. And unification is extremely important when it comes to education, especially when we are educating guides who do dangerous things with clients, or even recreational climbers/skiers, with whom many things overlap. We don't want people doing things each their own way; that could cause confusion when shit hits the fan. Unification also means having a reliable source that people from anywhere can find, and cite if they need to.

It sounds like you are not familiar with the scientific process or the field of education. That's fine, but please inform yourself or do a little research before trying to make a point in a discussion about just that. At least ask for clarification on anything that I'm suggesting so we can have a proper discussion.

Gregory Cooper · · Phoenix, AZ · Joined Oct 2014 · Points: 2,967
Connor Dobson wrote:

I have my PhD in anchorology from MPU, feel free to send stuff my way.

An excerpt from my thesis: …

That's the first time I've seen this copypasta used in a climbing setting. You should post it to r/climbingcirclejerk.

Gregory Cooper · · Phoenix, AZ · Joined Oct 2014 · Points: 2,967
Derek DeBruin wrote:

@Greg: thanks for chiming in. As has been noted, it's certainly a worthy idea and one I'd support. Implementation becomes the challenge. In the U.S., the logical place to house a repository of studies or even a journal is the American Alpine Club … 

Derek, this is going to sound odd when you read it, because I'm not really responding to you directly; it's more for everyone else who is piping in, and it also touches on other people's posts that I'm not replying to individually.

Actually, the American Alpine Club was one of my first thoughts! They already publish Accidents in North American Climbing. Beyond that, universities would be the next logical source, at least for some of the qualitative or quantitative studies. There are also the UIAA and CE: so much of the safety standardization lives there, and hopefully the testing methods do too for any company that makes gear, say 120cm nylon slings. Although we would have to watch for conflicts of interest, universities could do the independent strength testing. IFMGA guides (or other guides) could then use that information to formulate best practices and experiment with new ways of doing things, then show them to other IFMGA guides, who would fiddle with them and report back to the IFMGA. The IFMGA would make recommendations based on testing, peer review, and field reports. Those recommendations could then be published in the journal (not all articles are quan/qual studies*) or in a well-cited, well-documented, well-written, endorsed white paper, which then becomes part of the worldwide resource, the "IFMGA Manual of Best Practices for Rock Climbing" (working title).

An example would be the figure-eight follow-through (IFMGA ISO 1234). (Yes, we should standardize knot names universally across the board too.) The paper could say something like: based on the evidence of its strength relative to other knots (citations), its ease of tying (citations), its ease of inspection (citations), and a field incident failure rate of 0.00035%** (citations), we recommend that all IFMGA-certified organizations and guides use the figure-eight follow-through, in both personal (recreational) and professional (guided) settings, for tying into the end of the rope when top roping or lead climbing in single-pitch and multi-pitch settings. NOTE: I know there are probably exceptions to this, and I'm speaking specifically about rock climbing, not ski, alpine, mountaineering, or other specialties. I'm also aware of the double bowline, which the article could compare and contrast as well.

The article/white paper could then describe how to tie the knot properly and explain why (with citations), along with other pertinent information. Even a history of the knot's use would be beneficial, so we get a more holistic understanding of why we do what we do. The paper could also compare other tie-in methods using the same metrics, etc.

The best thing about this is that once it's published, it's out there for the climbing community and stakeholders to see. Maybe in 15 years we develop rope materials far superior to the ones we have, but due to the nature of those materials the figure eight is no longer best practice. A new article could then explain that the old method is superseded, and everyone would become aware of it because it lives in one place, in an endorsed, peer-reviewed journal/white paper.

Even better yet, after that we could publish a white paper/article explaining the best ways to teach the knot to different kinds of groups, and it could cite that first article! Science and education for the win!

****************

It simply sounds like we need to get stakeholders talking more: AAI, IFMGA, AMGA, clubs, and other associations foreign and domestic. Cross-pollination, resource sharing, standards sharing, etc. will be needed to pull this off. It's a big goal, but I feel it would be worth it for both professionals and recreational climbers. We need a place where we can share ideas, challenge our assumptions, and give a strong theoretical base to the things we do. And as others have said, and as I mention elsewhere, real-world testing needs to be a huge part of this; so we don't hurt real people, crash test dummies should be used to simulate many situations. We know an 80 kg weight will break a Dyneema sling at FF2, but when there is an anchor, a belayer, a rope, and terrain in the system, it may be a different story. Other than funding, I think this is one of the biggest cruxes of research in climbing. We need to see more of it.
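The FF2 claim about Dyneema can at least be sanity-checked with the standard idealized spring model of a fall. This is a hedged back-of-the-envelope sketch, not measured data: the rope and sling moduli below are illustrative assumptions, and real systems (knots, belayer, harness, body) absorb energy the model ignores, which is exactly why real-world testing matters.

```python
import math

def peak_force(mass_kg: float, fall_factor: float, modulus_n: float) -> float:
    """Peak impact force from the idealized spring model of a fall:
    F = mg + sqrt((mg)^2 + 2*mg*k*FF), where k is the lanyard's
    effective spring modulus in newtons (an assumed, illustrative value)."""
    g = 9.81
    w = mass_kg * g  # climber's weight in newtons
    return w + math.sqrt(w**2 + 2 * w * modulus_n * fall_factor)

# 80 kg climber, factor-2 fall, assumed dynamic-rope modulus ~20 kN:
f_rope = peak_force(80, 2.0, 20_000)

# A static Dyneema sling is orders of magnitude stiffer (assumed ~1000 kN),
# so the same fall produces forces well above a typical 22 kN sling rating:
f_sling = peak_force(80, 2.0, 1_000_000)

print(f"rope:  {f_rope / 1000:.1f} kN")
print(f"sling: {f_sling / 1000:.1f} kN")
```

With these assumed moduli the rope case comes out below 10 kN while the sling case comes out far above 22 kN, which is consistent with the drop-test results people cite; the exact numbers depend entirely on the assumed stiffness values.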

I think I've seen that publication. There was an article by a French university about the science of on-sighting in sport competition or something.

* For people who are not as familiar 

** A completely made-up statistic to make a point. We all know the figure eight is very safe, but the citation would back that up with science. Let's hope the real percentage is even smaller than that!
