An APAP Shootout (sort of) on Academic Journal

General Discussion on any topic relating to CPAP and/or Sleep Apnea.
-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Thu Jan 18, 2007 1:47 am

I sure do appreciate the compliments! That's a $100 check in the mail to drbandage and Rested Gal, exactly as promised.
drbandage wrote:That is why I cannot believe he could be so wrong in suspecting that he would not make an absolutely wonderful physician!
Unfortunately they never let the guy sitting in the back row with the pea shooter into med school.


This thread has been very interesting. However, when I reached the point where I thought I was done interleaving ideas with Doug and others, something didn't seem quite right, analytically. Something didn't seem right with my own view, that is. So I had to perform a paradigm shift, which is one of my favorite problem-solving tactics. I actually performed several paradigm shifts. The first paradigm shift was to assume Doug's viewpoints or perspective. At that point I needed to see just how many of Doug's ideas I might toss out or invalidate (exactly as I routinely toss out or invalidate my own ideas when attempting to formulate an understanding of any highly complex topic). At that point in my own analytical exercise I felt that most of Doug's opinions were quite valid.

A quick check was in order next, to see if I wanted to toss out or invalidate any of my own original opinions. I found that I wanted to keep almost all of them, but preferred to modify some of those to fit my newfound perspective. The next paradigm shift I performed was to place myself in the role of the authors of the APAP bench-testing study. I decided their motives were pure, although I wasn't pleased or confident that the study's results would be correctly used or interpreted by others. Again, I found some new perspective to consider resulting from that paradigm shift. Once again a quick check was in order, to see if I wanted to toss out or invalidate any of my own original opinions. Once again, I felt that I should keep them.

My conclusion? The various germane points presented in this thread were not necessarily diametrically opposed. That means that they very likely need(ed) to be interleaved into a more unified perspective. We were initially like the blind men trying to decipher the elephant. This issue was another one of those highly complex elephants. I would urge everyone involved in this debate to perform the analytical exercise of paradigm shifts.


User avatar
rested gal
Posts: 12881
Joined: Thu Sep 09, 2004 10:14 pm
Location: Tennessee

Post by rested gal » Thu Jan 18, 2007 2:27 am

-SWS wrote:I sure do appreciate the compliments! That's a $100 check in the mail to drbandage and Rested Gal, exactly as promised.
A small box of chocolates will do for me.
ResMed S9 VPAP Auto (ASV)
Humidifier: Integrated + Climate Control hose
Mask: Aeiomed Headrest (deconstructed, with homemade straps)
3M painters tape over mouth
ALL LINKS by rested gal:
viewtopic.php?t=17435

User avatar
SamCurt
Posts: 20
Joined: Sat Jan 06, 2007 10:15 am

Post by SamCurt » Thu Jan 18, 2007 4:44 am

Hey all, I wonder if I should summarize that other test, which I think is more sophisticated (though still an iron lung) than this one, and if yes, whether to post it in this thread or a new one.

User avatar
StillAnotherGuest
Posts: 1005
Joined: Sun Sep 24, 2006 6:43 pm

They're Great Studies

Post by StillAnotherGuest » Thu Jan 18, 2007 5:07 am

-SWS wrote: The next paradigm shift I performed was to place myself in the role of the authors of the APAP bench-testing study. I decided their motives were pure...
I think it's much more than that, -SWS. I mean, if the waveform analysis and event response of these machines is fixed, then sending in fixed signals for machine comparison is an absolutely valid methodology to analyze those characteristics. Waveform analysis really isn't that hard to do, and the major differences between machines will occur in the definition of what a flow limitation is and what the response algorithms are.
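As a rough illustration of that point, here's a minimal open-loop bench in Python. Everything in it is invented for illustration only: the toy "flattening index" and both response rules are not any manufacturer's real algorithm. The point is simply that identical fixed input makes differing detection thresholds and pressure step sizes directly comparable.

```python
def flattening_index(breath):
    """Crude flow-limitation score: mean inspiratory flow over peak flow.
    A flat-topped (flow-limited) breath scores higher than a rounded one."""
    peak = max(breath)
    mean = sum(breath) / len(breath)
    return mean / peak if peak > 0 else 0.0

def run_bench(machine_rule, breaths, start_pressure=6.0):
    """Replay the same fixed breath sequence into a response rule and
    record the pressure it commands after each breath (open loop)."""
    pressure = start_pressure
    trace = []
    for breath in breaths:
        pressure = machine_rule(pressure, flattening_index(breath))
        trace.append(round(pressure, 1))
    return trace

# Two toy response rules with different thresholds and step sizes.
def machine_a(p, fi):
    return min(p + 0.5, 20.0) if fi > 0.65 else p

def machine_b(p, fi):
    return min(p + 1.0, 20.0) if fi > 0.60 else p

# One rounded (normal) breath and one flat-topped (flow-limited) breath.
normal = [0.1, 0.5, 0.9, 1.0, 0.9, 0.5, 0.1]        # index ~0.57
flattened = [0.1, 0.6, 0.65, 0.65, 0.65, 0.6, 0.1]  # index ~0.74

fixed_waveform = [normal, flattened, flattened, normal]
print(run_bench(machine_a, fixed_waveform))  # [6.0, 6.5, 7.0, 7.0]
print(run_bench(machine_b, fixed_waveform))  # [6.0, 7.0, 8.0, 8.0]
```

On identical input, the two rules produce different pressure traces, which is exactly the comparison the fixed-signal methodology is after.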

Further, it seems a common theme in both articles and the editorial is that the algorithms should be shared with the medical community so that if APAP therapy is selected, then an informed decision can be made initially. APAP selection by trial and error is an extremely costly and time-consuming process. And that's also assuming that failure to improve with APAP is a fault of the algorithm, and not some other issue (which is probably more likely in the great majority of cases).

And towards that end, a significant point is made in the editorial by Dr. Lee K. Brown, Autotitrating CPAP: How Shall We Judge Safety and Efficacy of a “Black Box”? Chest 2006;130;312-314 where he points out
An intermediate approach (to determine treatment effectiveness) might be to depend on the quantification of respiratory events by the device itself, which is a form of circular reasoning that does not seem very appealing.
Right. Are you treating patients, or making good-looking numbers?

So if you want to go from the bench to the bedroom and really judge treatment effectiveness, you'll either have to get PSG on APAP or wait for the next generation of "Smart Machines", where airway stability, desaturation and arousal identification will be incorporated, and/or there will be an ability to tailor response to flow limitation (kinda like an infinitely adjustable IFL1).
SAG


Aromatherapy may help CPAP compliance. Lavender, Mandarin, Chamomile, and Sweet Marjoram aid in relaxation and sleep. Nature's Gift has these and a blend of all four called SleepEase.

User avatar
SamCurt
Posts: 20
Joined: Sat Jan 06, 2007 10:15 am

Re: They're Great Studies

Post by SamCurt » Thu Jan 18, 2007 6:28 am

StillAnotherGuest wrote: So if you want to go from the bench to the bedroom and really judge treatment effectiveness, you'll either have to get PSG on APAP
SAG
This has also been called for in the as-yet-unsummarized test paper, which asked for something more than flow rate to be measured. That paper showed that most APAPs increased pressure when there was a nonobstructive apnea.


User avatar
drbandage
Posts: 223
Joined: Tue Dec 19, 2006 1:42 am
Location: is everything . . .

Post by drbandage » Thu Jan 18, 2007 10:54 am

rested gal wrote:
-SWS wrote:I sure do appreciate the compliments! That's a $100 check in the mail to drbandage and Rested Gal, exactly as promised.
A small box of chocolates will do for me.
Ummm . . . then, could I have her check, too?
Dead Tired? Maybe you're sleeping with the Enemy.
Know Your Snore Score.

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Re: They're Great Studies

Post by -SWS » Thu Jan 18, 2007 11:32 am

StillAnotherGuest wrote:
-SWS wrote: The next paradigm shift I performed was to place myself in the role of the authors of the APAP bench-testing study. I decided their motives were pure...
I think it's much more than that, -SWS... <friendly snippyroo> ...Further, it seems a common theme in both articles and the editorial is that the algorithms should be shared with the medical community so that if APAP therapy is selected, then an informed decision can be made initially.

I never did get around to sharing exactly why I thought the authors' motives were valid, SAG. It was nice of you to connect the dots. But I think you initially connected them incorrectly relative to my own statement. I have also quoted your own subsequent valid statement about manufacturer disclosure, because that is the statement that belongs with my own words, which I think you may have inferentially interpreted out of context.

But I intentionally didn't clarify my statement, because that post of mine was aimed only at the topic of group problem solving (or information analysis) rather than the technical specifics of the bench-study presently being discussed.
SAG wrote:I mean, if the waveform analysis and event response of these machines is fixed, then sending in fixed signals for machine comparison is an absolutely valid methodology to analyze those characteristics. Waveform analysis really isn't that hard to do, and the major differences between machines will occur in the definition of what a flow limitation is and what the response algorithms are.
In my opinion it's valid methodology regarding only event detection itself (and then with some very major black-box qualifying/disqualifying considerations that I won't get into here). However, I don't think the approach is valid or accurate regarding either the end-to-end analysis of any APAP system or the overall efficacy of any one or group of APAP machines. And if this is not a suitable methodology for either of those two purposes, then what purpose does that methodology serve other than declaring a serious need for disclosure by manufacturers? Doug brought up the entirely valid point much earlier in this thread that this methodology also serves to make a dent in the overall problem of devising suitable efficacy related bench-testing.

But getting on to the low-level specifics once again... The biggest problem I have with this methodology as a benchmarking tool toward making overall efficacy comparisons is that the methodology involves only flow-limitation detection and short-term algorithmic response. The methodology cannot and does not factor in certain key events that are absolutely fundamental to the overall software architecture of any APAP system design.

As an example, snore detection is used on an entirely prophylactic basis for all the APAP algorithms sitting on that test bench. However, the testing methodology currently up for discussion does not even attempt to factor in this key prophylactic breathing signal. The methodology itself neglects what is perhaps the most important patient-based input signal. If any system's key input signal(s) are methodologically neglected, the system's design becomes inadvertently subverted, and the system's output is virtually guaranteed to become skewed. And that's exactly what happened with these system-output-oriented test results. But this is only one example of why breaking the crucial patient-to-machine feedback loop renders a methodology inadequate toward either assessing end-to-end system design or toward making overall APAP efficacy comparisons.

The methodology up for discussion also fails to factor in several other patient-based key inputs. Another example that comes to mind would be the experimental pressure-deltas that some algorithms introduce. The Respironics algorithm, for example, will occasionally drop then raise delivered pressure to gauge patient response (another missing patient-based input here) toward prophylactically calculating a new optimum therapeutic pressure. Conversely the PB algorithm will occasionally raise then drop pressure toward making similar prophylactic calculations under certain IFL configurations. Again, this bench-testing methodology cannot yield results suitable toward assessing any of these systems' designed outputs because that crucial patient-to-machine feedback loop has been broken.
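The broken-loop objection above can be caricatured in a few lines of Python. The simulated "patient" and the response rule below are pure invention (no real algorithm is modeled); the point is only that the same response logic settles near a plausible pressure when the airway reacts to pressure, but ratchets toward its maximum when a bench rig keeps replaying the same flow-limited pattern regardless of what the machine delivers.

```python
def machine_step(pressure, fl_score):
    """Toy rule: raise pressure 0.5 cmH2O when flow limitation is seen,
    otherwise decay slowly toward a 4.0 cmH2O floor."""
    if fl_score > 0.5:
        return min(pressure + 0.5, 20.0)
    return max(pressure - 0.1, 4.0)

def closed_loop(breaths=20, start=6.0):
    """Patient feedback intact: flow limitation resolves once delivered
    pressure exceeds 8 cmH2O (the pressure splints the airway)."""
    p = start
    for _ in range(breaths):
        fl = 0.9 if p < 8.0 else 0.2
        p = machine_step(p, fl)
    return round(p, 1)

def open_loop(breaths=20, start=6.0):
    """Bench rig: the same flow-limited waveform is replayed no matter
    what pressure the machine delivers -- the loop is broken."""
    p = start
    for _ in range(breaths):
        p = machine_step(p, 0.9)
    return round(p, 1)

print(closed_loop())  # hovers near the pressure that resolves the events
print(open_loop())    # climbs steadily toward the machine's maximum
```

The identical response rule looks sensible in one setup and runaway in the other, which is why the bench output alone says little about bedside behavior.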
SAG wrote:So if you want to go from the bench to the bedroom and really judge treatment effectiveness, you'll either have to get PSG on APAP...
Disregarding practicality considerations, I agree with this principle on two bases. First, you might get individual patients paired with APAP and PSG to judge overall resulting efficacy (in addition to comparing individual event response itself, which I would personally view as a subordinate objective toward overall efficacy assessment). Secondly, you might accumulate epidemiological data relative to each APAP algorithm. I think practicality supersedes in this latter case especially, since APAP technology will probably change (in steady increments and periodic leaps) faster than epidemiology can keep up. Epidemiology related to pharmaceuticals probably suffers this problem to a lesser degree, since individual pharmaceutical solutions tend to have a longer product cycle. I think pharmaceutical solutions often tend to supplement each other in the marketplace rather than suddenly displace each other. By contrast, APAP models tend to get more rapidly displaced by new models rather than being supplemented as long-term alternatives. Again, APAP technology changes in both steady increments and occasional enormous leaps. A10, for instance, has been out for quite some time, but it has also been incrementally evolving all along.
SAG wrote:or wait for the next generation of "Smart Machines", where airway stability, desaturation and arousal identification will be incorporated, and/or there will be an ability to tailor response to flow limitation (kinda like an infinitely adjustable IFL1).
Yeah! Unfortunately real-time system resources are perhaps the biggest contemporary design constraint toward achieving an ideal, all-encompassing APAP algorithm. But contemporary design constraints tend to become obsolete as well.


User avatar
dsm
Posts: 6996
Joined: Mon Jun 20, 2005 6:53 am
Location: Near the coast.

Post by dsm » Thu Jan 18, 2007 4:07 pm

SWS,

I want to have a go at summarizing your points as I understand them, as well as what I understand the research project was trying to achieve. This may turn out to be an oversimplification but I'll try ...

- The researchers wanted to test certain reactions of Autos.

- They decided to do so in part by using a set of industry standard breathing patterns (I'll call these the R-- patterns) that replicate previously documented clinical data taken from 'control' patients and accepted in the profession

- They also needed & built a basic machine to generate the breathing & fed this into a unit that applied the R-- patterns to the airflow

- They then set out to measure specific reactions of each machine to the R-- patterns. In some cases they varied the output from the generator, (but I can't recall if they mixed any of the patterns)

- The researchers motives were likely threefold (my opinion coming out here)
1) To try to come up with some clinical understanding of what these machines could do based on the claims of the manufacturers
2) To put a shot across the bow of the manufacturers over their repeated refusal to provide researchers with more details of their algorithms such that more detailed clinical analysis can be designed & performed
3) To offer respiratory professionals the findings of the researchers to aid them in deployment of Apaps

*************

Your issues with their approach include ...

- Targeting specific aspects of the machine one at a time, in isolation, may show up apparently unsatisfactory responses to that one factor but ignores the holistic reaction of the machine

- Only 'holistic' testing (a complete loop machine to patient) of an Apap is meaningful and it seems that good holistic tests & methodologies have not yet been devised (see next point as specific clarification)

- That the biggest flaw in the methodology is that the test apparatus does not take feedback from the Auto which in normal use would then change the respiratory pattern from the patient (the loop)

- In particular the Respironics non-responsive patient algorithm appears to have been ignored and the machine maligned unfairly in one of the tests because of the lack of a testing loop

- That the perceived deficiencies in the tests negate the whole testing process

*************

DSM

_________________

CPAPopedia Keywords Contained In This Post (Click For Definition): respironics, auto, APAP

xPAP and Quattro std mask (plus a pad-a-cheek anti-leak strap)

User avatar
rested gal
Posts: 12881
Joined: Thu Sep 09, 2004 10:14 pm
Location: Tennessee

Post by rested gal » Thu Jan 18, 2007 5:11 pm

dsm wrote:- The researchers motives were likely threefold (my opinion coming out here)

---

3) To offer respiratory professionals the findings of the researchers to aid them in deployment of Apaps
And that very point you mention... because it's unfortunately likely to be what many doctors as well as respiratory professionals will zero in on when looking at those tests (how did machine A behave compared to machine B, C, D, etc.)... is why tests set up in such a way -- breaking the dynamic human breathing feedback loop -- are not a good idea, imho. False assumptions about each machine are going to come out of such a test.

Believe it or not, I feel just as strongly about that no matter which brand of autopap does, or does not, do this/that when presented with the waveforms, with no human feedback to give the machines an indication what to do next. It's a meaningless (imho) snapshot of what all those autopaps would actually do in real therapy situations.

Whether or not your point 3 was any part of the study's objectives, I think looking at those tests will indeed influence health care professionals in making a choice among autopaps, or even make them reject autopaps in general as effective treatment machines -- for no valid reason.

On the good side, I do believe most OSA sufferers can be well treated by any autopap set properly. Just my opinion.
ResMed S9 VPAP Auto (ASV)
Humidifier: Integrated + Climate Control hose
Mask: Aeiomed Headrest (deconstructed, with homemade straps)
3M painters tape over mouth
ALL LINKS by rested gal:
viewtopic.php?t=17435

User avatar
dsm
Posts: 6996
Joined: Mon Jun 20, 2005 6:53 am
Location: Near the coast.

Post by dsm » Thu Jan 18, 2007 5:40 pm

RG,

Your point does cut to the quick.

The researchers, though, stated in their conclusions that lack of adequate input from the manufacturers was preventing them from devising more detailed tests.

They also state quite clearly that Autos are maturing and proving to be very effective for some people in deployment.

But, their point is they are hamstrung to some extent by the withholding of data by the manufacturers &, like I said, I saw their conclusion as a shot across the bow of the manufacturers, and the fact that respiratory professionals will read their research reports is going to put the heat on the manufacturers.

I come back to the very forceful point I made about Dr Joffe - one of our country's leading practitioners in the respiratory field - his reading of the report & his seeing fit to pass it on to an interested patient. I have no doubt that is happening around the world.

I still have an open mind on testing specific algorithmic behaviours of an auto vs attempting to establish 'holistic' tests (we have been down that path already re the issue of using humans to conduct research, allowing for the value that one person's results bring in terms of the whole community)

I have no doubt that a clinical trial will produce the type of result you are seeking - but we can't ignore the medical research community, who get funded by governments & other research bodies to investigate just what a medical device really does in relation to the claims made about it.

DSM

xPAP and Quattro std mask (plus a pad-a-cheek anti-leak strap)

User avatar
rested gal
Posts: 12881
Joined: Thu Sep 09, 2004 10:14 pm
Location: Tennessee

Post by rested gal » Thu Jan 18, 2007 6:37 pm

dsm wrote:I have no doubt that a clinical trial will produce the type of result you are seeking
Actually I've never said I was seeking any "result" if you mean autopap treatment comparisons or algorithm comparisons. I've not been wondering how autopaps compare to each other at all. I've simply been commenting on what I thought of this particular type of test. As I've already stated I believe most people with OSA can be well treated by any autopap set properly. My point in this thread all along has been this, and only this...hooking autopaps up to an artificial breathing machine gives useless comparison results, imho.
dsm wrote:- but we can't ignore the medical research community who get funded by governements & other research bodies, to investigate just what a medical device really does in relation to the claims made about it.
Oh, I'm not ignoring them. I'm saying what I think about this particular "investigation." In my opinion, they spent a lot of time and money on an "investigation" that could not in any way shed any meaningful light on "what a medical device really does in relation to claims made about it."

I'm not ignoring the medical research community. Just saying what I, purely as a layperson and autopap user, think about the lack of usefulness of this particular study.
ResMed S9 VPAP Auto (ASV)
Humidifier: Integrated + Climate Control hose
Mask: Aeiomed Headrest (deconstructed, with homemade straps)
3M painters tape over mouth
ALL LINKS by rested gal:
viewtopic.php?t=17435

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Thu Jan 18, 2007 6:49 pm

dsm wrote:I want to have a go at summarizing your points as I understand them, as well as what I understand the research project was trying to achieve. This may turn out to be an over simplification but I'll try...
- The researchers wanted to test certain reactions of Autos.
- They decided to do so in part by using a set of industry standard breathing patterns (I'll call these the R-- patterns) that replicate previously documented clinical data taken from 'control' patients and accepted in the profession
- They also needed & built a basic machine to generate the breathing & fed this into a unit that applied the R-- patterns to the airflow
- They then set out to measure specific reactions of each machine to the R-- patterns. In some cases they varied the output from the generator, (but I can't recall if they mixed any of the patterns)
Sounds good enough.
dsm wrote:- The researchers motives were likely threefold (my opinion coming out here)
1) To try to come up with some clinical understanding of what these machines could do based on the claims of the manufacturers
2) To put a shot across the bow of the manufacturers over their repeated refusal to provide researchers with more details of their algorithms such that more detailed clinical analysis can be designed & performed
3) To offer respiratory professionals the findings of the researchers to aid them in deployment of Apaps
My own guess is that item number two is the real salient motive here. But that guess may be biased, since I don't think the methodology can accomplish objectives one and three.
dsm wrote:- Targeting specific aspects of the machine one at a time, in isolation may show up apparently unsatisfactory responses to that one factor but is ignoring the holistic reaction of the machine
I believe there are many cases in system analysis where you can test outputs relative to individual inputs---then devise quantitative means to summarily predict the output (and hence efficacy in this case). But until you have sufficient patient-signal inputs that are absolutely key to efficacy, then you cannot quantitatively predict that system's output in any meaningful way, IMHO.
dsm wrote: - Only 'holistic' testing (a complete loop machine to patient) of an Apap is meaningful and it seems that good holistic tests & methodologies have not yet been devised (see next point as specific clarification)
I don't think purely holistic testing is always necessary. But patient airflow is really a composite signal with multiple events or kinds of key information buried in that single data channel. Flow limitation (simulated via R-value inputs) is the only patient-related signal this methodology injects.
dsm wrote:- That the biggest flaw in the methodology is that the test apparatus does not take feedback from the Auto which in normal use would then change the respiratory pattern from the patient (the loop)

In particular the Respironics non-responsive patient algorithm appears to have been ignored and the machine maligned unfairly in one of the tests because of the lack of a testing loop
In a nutshell, these two salient factors are amiss in my opinion: 1) the composite patient signal has been stripped of all but one essential component, and 2) a two-way patient-feedback loop between man and machine has been neither preserved nor simulated. Snore in particular is perhaps the single most important missing input. But collectively the missing composite patient-event inputs (some relying on a two-way system feedback loop, some not) are of far greater magnitude than any single deficiency in this methodology.
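That "composite channel" idea can be sketched in Python. The breath model, the 0.7 flow-limitation clamp, and both toy detectors below are invented purely for illustration: two different cues ride in the single flow channel, and a bench replay that preserves the flow-limited shape but strips the snore vibration silently disables one of the detectors.

```python
import math

def patient_breath(flattened=False, snore=False, n=50):
    """One inspiratory flow curve: a half-sine, optionally flat-topped
    (flow limitation), optionally with a snore vibration superimposed."""
    flow = []
    for i in range(n):
        f = math.sin(math.pi * i / (n - 1))                # rounded baseline
        if flattened:
            f = min(f, 0.7)                                # flat top
        if snore:
            f += 0.05 * math.sin(2 * math.pi * 8 * i / n)  # buried snore cue
        flow.append(f)
    return flow

def detect_events(flow):
    """Toy detectors pulling two different cues out of the one channel."""
    events = set()
    if (sum(flow) / len(flow)) / max(flow) > 0.68:         # flat-top shape
        events.add("flow_limitation")
    # Snore cue: high-frequency energy, measured as the mean absolute
    # second difference (the slow breath shape contributes almost nothing).
    curvature = sum(abs(flow[i + 1] - 2 * flow[i] + flow[i - 1])
                    for i in range(1, len(flow) - 1)) / (len(flow) - 2)
    if curvature > 0.015:
        events.add("snore")
    return events

live_signal = patient_breath(flattened=True, snore=True)
bench_replay = patient_breath(flattened=True, snore=False)  # snore stripped

print(sorted(detect_events(live_signal)))   # ['flow_limitation', 'snore']
print(sorted(detect_events(bench_replay)))  # ['flow_limitation']
```

The replayed waveform still looks flow-limited, so that detector fires either way; but the snore-driven (prophylactic) branch of the algorithm never gets exercised on the bench, so its contribution to the machine's output simply vanishes from the results.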
dsm wrote:- That the perceived deficiencies in the tests negate the whole testing process
- Negate toward demonstrating that manufacturer disclosure is seriously needed? No.

- Negate toward making progress at devising a bench-testing methodology that can one day yield suitable efficacy information? No.

- Negate toward yielding adequate efficacy information for physicians today? Yes. I think the system output is too heavily skewed because of the many missing patient-based events and signals buried in that composite APAP data channel.


User avatar
dsm
Posts: 6996
Joined: Mon Jun 20, 2005 6:53 am
Location: Near the coast.

Post by dsm » Thu Jan 18, 2007 6:55 pm

rested gal wrote: <snip>
I'm not ignoring the medical research community. Just saying what I, purely as a layperson and autopap user, think about the lack of usefulness of this particular study.

RestedGal,

I am still not sure I know what parts of 'the study' you see as faulty. Is there a particular test or tests that you see as flawed?

Also, what do you think of SAG's point here (esp. the highlighted bit) ...

*********************************************
Further, it seems a common theme in both articles and the editorial is that the algorithms should be shared with the medical community so that if APAP therapy is selected, then an informed decision can be made initially. APAP selection by trial and error is an extremely costly and time-consuming process. And that's also assuming that failure to improve with APAP is a fault of the algorithm, and not some other issue (which is probably more likely in the great majority of cases).

And towards that end, a significant point is made in the editorial by Dr. Lee K. Brown, Autotitrating CPAP: How Shall We Judge Safety and Efficacy of a “Black Box”? Chest 2006;130;312-314 where he points out
An intermediate approach (to determine treatment effectiveness) might be to depend on the quantification of respiratory events by the device itself, which is a form of circular reasoning that does not seem very appealing.
Right. Are you treating patients, or making good-looking numbers?
*****************************************************

Tks D

Last edited by dsm on Thu Jan 18, 2007 7:09 pm, edited 1 time in total.
xPAP and Quattro std mask (plus a pad-a-cheek anti-leak strap)

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Thu Jan 18, 2007 7:03 pm

I almost accidentally deleted my previous post when I was trying to copy/paste the following text:
dsm wrote:- That the perceived deficiencies in the tests negate the whole testing process
-SWS wrote:- Negate toward yielding adequate efficacy information for physicians today? Yes. I think the system output is too heavily skewed because of the many missing patient-based events and signals buried in that composite APAP data channel.
Doug, SAG, Rested Gal, et al- There has to be at least some useful information buried in these test results that today's clinicians can take advantage of. Would anyone mind taking and expanding on the opposite stance that I have taken on this point (in addition to any other points or ideas you favor, of course)? I'll help explore the opposite stance further, but I think SAG or Doug in particular may have some ideas they've already explored here. Anyone? Thanks.

Last edited by -SWS on Thu Jan 18, 2007 7:11 pm, edited 3 times in total.

User avatar
dsm
Posts: 6996
Joined: Mon Jun 20, 2005 6:53 am
Location: Near the coast.

Post by dsm » Thu Jan 18, 2007 7:04 pm

SamCurt wrote:Hey all, I wonder if I should summarize that other test, which I think is more sophisticated (though still an iron lung) than this one, and if yes, whether to post it in this thread or a new one.
If you have another report, yes, do create a new thread. I am more than sure we are all learning from the debate on these research reports.

If you have a soft copy of any recent reports I would very much appreciate an emailed copy - perhaps you have PDF versions?

Pls PM me

Tks & yes - go for it

DSM
xPAP and Quattro std mask (plus a pad-a-cheek anti-leak strap)