StillAnotherGuest wrote:-SWS wrote: The next paradigm shift I performed was to place myself in the role of the authors of the APAP bench-testing study. I decided their motives were pure...
I think it's much more than that, -SWS... <friendly snippyroo> ...Further, it seems a common theme in both articles and the editorial is that the algorithms should be shared with the medical community so that if APAP therapy is selected, then an informed decision can be made initially.
I never did get around to sharing exactly why I thought the authors' motives were valid, SAG. It was nice of you to connect the dots, but I think you initially connected them wrong relative to my own statement. I have also quoted your subsequent valid statement about manufacturer disclosure, because that is the statement that belongs with my own words, which I think you may have inferentially interpreted out of context.
But I intentionally didn't clarify my statement, because that post of mine was aimed only at the topic of group problem solving (or information analysis) rather than the technical specifics of the bench-study presently being discussed.
SAG wrote:I mean, if the waveform analysis and event response of these machines is fixed, then sending in fixed signals for machine comparison is an absolutely valid methodology to analyze those characteristics. Waveform analysis really isn't that hard to do, and the major differences between machines will occur in the definition of what a flow limitation is and what the response algorithms are.
In my opinion it's a valid methodology regarding only event detection itself (and even then with some very major black-box qualifying/disqualifying considerations that I won't get into here). However, I don't think the approach is valid or accurate regarding either the end-to-end analysis of any APAP system or the overall efficacy of any one APAP machine or group of machines. And if this is not a suitable methodology for either of those two purposes, then what purpose does it serve other than declaring a serious need for disclosure by manufacturers?
Doug brought up the entirely valid point much earlier in this thread that this methodology also serves to make a dent in the overall problem of devising suitable efficacy-related bench testing.
But getting back to the low-level specifics... The biggest problem I have with this methodology as a benchmarking tool for making overall efficacy comparisons is that it involves only flow-limitation detection and short-term algorithmic response. The methodology cannot and does not factor in certain algorithmically crucial events that are absolutely fundamental to the overall software architecture of any APAP system design.
As an example, snore detection is used on an entirely prophylactic basis by all the APAP algorithms sitting on that test bench. However, the testing methodology currently up for discussion does not even attempt to factor in this key prophylactic breathing signal. The methodology itself neglects what is perhaps the most important patient-based input signal.
If any system's key input signals are methodologically neglected, the system's design is inadvertently subverted, and its output is virtually guaranteed to be skewed. And that's exactly what happened with these system-output-oriented test results. But this is only one example of why breaking the crucial patient-to-machine feedback loop renders a methodology inadequate either for assessing end-to-end system design or for making overall APAP efficacy comparisons.
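To make the feedback-loop point concrete, here is a deliberately toy sketch (my own invented titration rule, not any vendor's algorithm, with an assumed "therapeutic" threshold of 9 cmH2O): replaying a fixed, pre-recorded signal (open loop) to a controller designed around a responding patient (closed loop) drives the output somewhere the designers never intended.

```python
# Toy illustration (hypothetical rule, NOT any manufacturer's algorithm):
# why replaying a fixed bench signal can skew an APAP controller's output
# compared with a patient whose breathing actually responds to pressure.

def controller_step(pressure, flow_limited):
    """Minimal hypothetical titration rule: raise on flow limitation, decay otherwise."""
    if flow_limited:
        return min(pressure + 0.5, 20.0)  # step up, capped at machine maximum
    return max(pressure - 0.2, 4.0)       # drift down, floored at machine minimum

def run(n_breaths, patient_responds):
    pressure = 4.0
    for _ in range(n_breaths):
        if patient_responds:
            # Closed loop: limitation resolves once pressure is adequate
            # (assumed therapeutic pressure of 9 cmH2O for this toy patient).
            flow_limited = pressure < 9.0
        else:
            # Open loop: a pre-recorded bench signal showing flow limitation
            # on every breath, regardless of the pressure being delivered.
            flow_limited = True
        pressure = controller_step(pressure, flow_limited)
    return pressure

print(run(60, patient_responds=True))   # hovers near the assumed therapeutic pressure
print(run(60, patient_responds=False))  # climbs to the machine's maximum
```

The same controller that settles sensibly against a responding patient pegs itself at maximum pressure against the fixed signal, which is roughly the kind of skew I'm describing.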
The methodology up for discussion also fails to factor in several other key patient-based inputs. Another example that comes to mind would be the experimental pressure deltas that some algorithms introduce. The Respironics algorithm, for example, will occasionally drop and then raise delivered pressure to gauge patient response (another missing patient-based input here) toward prophylactically calculating a new optimum therapeutic pressure. Conversely, the PB algorithm will occasionally raise and then drop pressure toward making similar prophylactic calculations under certain IFL configurations. Again, this bench-testing methodology cannot yield results suitable for assessing any of these systems' designed outputs, because that crucial patient-to-machine feedback loop has been broken.
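A drop-then-observe probe of that general flavor could be sketched as follows. This is a hypothetical caricature, loosely in the spirit of the behavior described above and not Respironics' actual code; the probe delta, the stability test, and the simulated patient are all my own assumptions. The point is that the probe's outcome depends entirely on the patient's response, which a fixed bench signal cannot supply.

```python
# Hypothetical "pressure probe" sketch (invented for illustration, not any
# vendor's implementation): drop pressure by a trial delta and keep the lower
# pressure only if the simulated patient's airway remains stable there.

def probe_lower_pressure(current_pressure, patient_flow_limited_at):
    """Try a lower pressure; adopt it only if no flow limitation reappears.

    patient_flow_limited_at: callable taking a pressure (cmH2O) and returning
    True if the (simulated) patient shows flow limitation at that pressure.
    """
    trial = current_pressure - 2.0  # assumed probe delta of 2 cmH2O
    if patient_flow_limited_at(trial):
        return current_pressure  # patient deteriorated: restore original pressure
    return trial                 # stable at lower pressure: adopt it as the new optimum

# Simulated patient who is stable at or above 8 cmH2O:
patient = lambda p: p < 8.0

print(probe_lower_pressure(12.0, patient))  # probe succeeds: 10.0
print(probe_lower_pressure(9.0, patient))   # probe fails (7.0 is too low): 9.0
```

Feed that probe a canned signal that never reacts to the trial pressure and its answer is meaningless, which is exactly the broken-feedback-loop problem.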
SAG wrote:So if you want to go from the bench to the bedroom and really judge treatment effectiveness, you'll either have to get PSG on APAP...
Disregarding practicality considerations, I agree with this principle on two bases. First, you might pair individual patients on APAP with PSG to judge overall resulting efficacy (in addition to comparing individual event response itself, which I would personally view as a subordinate objective within overall efficacy assessment). Second, you might accumulate epidemiological data for each APAP algorithm. I think practicality supersedes in this latter case especially, since APAP technology will probably change (in steady increments and periodic leaps) faster than epidemiology can keep up. Epidemiology related to pharmaceuticals probably suffers this problem to a lesser degree, since individual pharmaceutical solutions tend to have a longer product cycle. I think pharmaceutical solutions often tend to supplement each other in the marketplace rather than suddenly displace one another. By contrast, APAP models tend to be rapidly displaced by new models rather than supplemented as long-term alternatives. Again, APAP technology changes in both incremental steps and occasional enormous bounds. A10, for instance, has been out for quite some time, but it has also been incrementally evolving all along.
SAG wrote:or wait for the next generation of "Smart Machines", where airway stability, desaturation and arousal identification will be incorporated, and/or there will be an ability to tailor response to flow limitation (kinda like an infinitely adjustable IFL1).
Yeah! Unfortunately, real-time system resources are perhaps the biggest contemporary design constraint on achieving an ideal, all-encompassing APAP algorithm. But contemporary design constraints tend to become obsolete as well.