Finally slept through the whole night....

General Discussion on any topic relating to CPAP and/or Sleep Apnea.
-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Study Design

Post by -SWS » Sat Mar 05, 2005 8:33 pm

I don't blame any unsuspecting doctor looking at that study for being leery of AutoPAPs. The study purports to partially clarify (older) AutoPAPs, yet ironically causes more confusion than anything else in my opinion.

I contend that only because most AutoPAPs will: 1) sense a sleep event, 2) administer an initial pressure-response based on that event, then 3) calculate patient airflow to determine required subsequent pressure adjustments. That study fails to show any patient response whatsoever to any of those initial pressure increments. That study also fails to accurately simulate the iterative pressure adjustments that are based on patient feedback. The study only plays a non-responsive sleep-event "loop" to further demonstrate how each AutoPAP responds to a test dummy that doesn't respond at all to pressure. Or in other words: the study breaks the patient-to-machine feedback loop that would be absolutely crucial to any of those AutoPAP algorithms. Rhetorically speaking: how does that study design clarify more than confuse? It only shows that older AutoPAP models will present different initial pressures to obstructive events, then become algorithmically confused in an entire variety of ways by those artificial pressure-unresponsive breathing loops. I admittedly chuckled a bit to myself reading that study.
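As a toy sketch of that three-step loop (my own made-up pressures and airflow threshold, not any manufacturer's actual algorithm), the sense/respond/measure cycle looks something like this:

```python
OBSTRUCTION_RELIEF = 9.0  # hypothetical pressure (cm H2O) that relieves the event

def measure_airflow(pressure):
    """Toy responsive 'patient': airflow recovers once pressure is adequate."""
    return 1.0 if pressure >= OBSTRUCTION_RELIEF else 0.2

def respond_to_event(pressure, step=0.5, max_tries=10):
    """1) event sensed, 2) initial pressure response, 3) patient airflow
    measured to drive each subsequent adjustment -- the feedback loop."""
    for _ in range(max_tries):
        pressure += step                      # pressure response
        if measure_airflow(pressure) >= 0.8:  # patient feedback closes the loop
            break                             # obstruction relieved
    return pressure

print(respond_to_event(8.0))  # climbs in 0.5 cm steps until the toy patient recovers
```

The point of the sketch: step 3 only means something if the "patient" can actually respond to pressure. Swap in a recording that never changes and the loop has nothing meaningful to react to.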

And, of course, that study also fails to take into account anything the proactive portion of each AutoPAP algorithm would do to prevent those sleep events in the first place. Gotta love studies like that.
Last edited by -SWS on Sat Mar 05, 2005 8:38 pm, edited 1 time in total.

wading thru the muck!
Posts: 2799
Joined: Tue Oct 19, 2004 11:42 am

Post by wading thru the muck! » Sat Mar 05, 2005 8:38 pm

The study concludes that the data does not suggest that one unit is better than another, but I think it clearly shows the 418P (now the 420E) is superior to the rest. The AutoSet over-responded to flow limitations and had a decreased ability to respond when subject to leaks. The Tranquility also had this problem with the leaks and was not able to decrease pressure during the "normal" segments. The DeVilbiss seemed to not react specifically to anything. The GoodKnight 418P avoided overreacting to flow limitations, decreased pressure during the normal intervals and did not lose amplitude due to leaks.
Sincerely,
wading thru the muck of the sleep study/DME/Insurance money pit!

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Sat Mar 05, 2005 8:43 pm

Mucky, you couldn't generalize that the 418P (now the 420E) is clearly superior to the rest based on that one breathing pattern....
...any more than you could have concluded that the 420E was inferior to the rest because it experienced pressure runaway problems with Rested Gal's, EEBROM's, Janelle's, UKnowWhatInSeattle's flow limitations. One limited breathing sample in a flawed study does not make for a sound generalized conclusion.

Especially since the artificial (non-responsive) breathing pattern clearly breaks the essential (patient-based) feedback loop that any algorithm would rely on. Just my opinion, though...

User avatar
wading thru the muck!
Posts: 2799
Joined: Tue Oct 19, 2004 11:42 am

Post by wading thru the muck! » Sat Mar 05, 2005 9:23 pm

-SWS,

I agree the study was flawed as it relates to the real world. But for the data sets shown, the 418P responded the best to what the machines were subjected to, for whatever it's worth. A useful exercise would be to create an artificial breathing machine that could replicate the wave form of a given individual's breathing patterns and be used to determine which machine best responds to that individual's needs. Seems to me that it would not be difficult to construct and extremely valuable. This ongoing patient compatibility data could be used to improve future algorithms. I'm sure the manufacturers would cringe at the thought of a collection of real breathing pattern data that their machine could not resolve as well as the competition.
Sincerely,
wading thru the muck of the sleep study/DME/Insurance money pit!

day for night
Posts: 30
Joined: Wed Feb 23, 2005 10:26 am

Post by day for night » Sat Mar 05, 2005 9:40 pm

Thanks to everyone for the information. Absolutely incredible stuff. I knew my response would set off a debate. Understand that I'm only a month into this whole cpap thing, so while I questioned my doc, I didn't have a full knowledge base. I'm much more informed now, and my next visit (scheduled for Tuesday) should be a lively one. I do believe that my doc is really good and that he would prescribe an auto if I ask for it.
CPAP BLOWS! (get it? It blows and it "blows" haha wow I kill me!)

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Sat Mar 05, 2005 10:20 pm

wading thru the muck! wrote:A useful exercise would be to create an artificial breathing machine that could replicate the wave form of a given individual's breathing patterns and be used to determine which machine best responds to that individual's needs.
Wader, I think that truly would be useful. However, I think this artificial breathing machine really needs to be a much better "patient simulator" than the non-responsive one used in that study. Why? Because any AutoPAP algorithm relies on a patient-based feedback loop. A useful breathing simulator should also accommodate that patient-based feedback loop that is key to any AutoPAP's algorithm, in my opinion.

littlebaddow
Posts: 416
Joined: Wed Dec 08, 2004 12:21 pm
Location: Essex, England

Post by littlebaddow » Sun Mar 06, 2005 5:17 am

From an unscientific and un-researched viewpoint, I'd like to add my experience to this fascinating debate.

I was diagnosed as moderate after a single night sleep study. The doc gave me a loan machine for 6 nights - a ResMed auto, though I don't remember the exact model - and based on the downloaded results, prescribed me at 11cm & told me I had 20 to 30 events an hour.

He then loaned me a fixed pressure machine, set at 10cm which I used with mixed results for about 3 weeks.

Based on reading in this forum, and with his agreement that auto machines have advantages, I then purchased my own machine - a REMstar Auto with C-Flex & heated humidifier. I also got the software and, for the last 23 days, bearing in mind my original px of 11cm, the total % of time spent at each pressure is:

4cm 58%
5cm 20%
6cm 15%
7cm 9%
8cm 5%
9cm 1%
10cm 2%

During that period, my AHI has ranged from 0.5 to 1.9 with an average of 1.2 and the 90% has ranged from 5 to 8 with no discernible pattern.

The acid test is that I feel healthy and alert again, some 3 months after starting treatment.

I'm sure there are several possible explanations as to why the level needed now is different to the one originally prescribed, but the key point seems to be that a sleep study and the first few days use of a machine are no more than a snap shot, taken in unfamiliar circumstances. How can it be right to base a long period of treatment on those results?

_________________
Machine: AirSense 10
Mask: AirFit N20

Mikesus
Posts: 1211
Joined: Wed Feb 09, 2005 6:50 pm

Post by Mikesus » Sun Mar 06, 2005 6:49 am

SWS, Wader, RG, there is one major factor missing in that study: who paid for it. It very well might have been funded by a group of SLEEP CENTERS trying to prove that autos don't work.

Unfortunately in this day and age, you not only need to know how they did the test, but WHY they did the test, and who paid for it.

wading thru the muck!
Posts: 2799
Joined: Tue Oct 19, 2004 11:42 am

Post by wading thru the muck! » Sun Mar 06, 2005 7:14 am

-SWS,

I'm not talking about an artificial breathing machine. I'm talking about a machine in which you could input the actual wave form of the breathing patterns of an actual human being and then replicate it with each machine to see which works better. I understand that this may not be perfect because there is a certain amount of interaction between each machine and the user. I'm sure this type of thing IS being done by each individual manufacturer in the development of their algorithms. It would be nice to apply this technique to a range of machines and see how they compare for an individual user.

I guess we are probably getting ahead of ourselves. What we need is for several prestigious universities to conduct independent studies regarding the efficacy of the APAP. Hopefully these will demonstrate the benefits that many of us have found in our own use.
Sincerely,
wading thru the muck of the sleep study/DME/Insurance money pit!

-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Sun Mar 06, 2005 9:57 am

Mike, I think your point is a very valid one. Regardless of ulterior market-driven motive being at fault or simple human oversight, it's important for all of us to realize that even highly revered "medical studies" are not at all above being very poorly designed in my opinion. I have read plenty of medical studies with diametrically opposed conclusions. Very cool avatar, BTW!
wading thru the muck! wrote:I'm not talking about an artificial breathing machine. I'm talking about a machine in which you could input the actual wave form of the breathing patterns of an actual human being and then replicate it with each machine to see which works better.
Wader, the point I was trying to get across is that simply lobbing a single and unresponsive recorded or artificial sleep event into an AutoPAP is in and of itself inherently flawed and of little use. An AutoPAP algorithm will very often require several iterations of: 1) pressure adjustment, 2) patient breath detection, and 3) pressure re-adjustment based on step 2.

The machine responds to the patient. The patient then responds to the machine. The machine then reiteratively responds back to the patient for crucial adjustments. That is the two-way loop that is broken in the study and in your proposed machine. There must be a patient response (simulated or real) to truly test the algorithm. To rely on lobbing one obstructive sleep event (even repeatedly) is like assessing world-class tennis players using only a serving machine. It just doesn't make for any sort of useful comparison in my opinion.
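To make the broken loop concrete, here's a toy sketch (hypothetical numbers and threshold, not the study's actual test rig): the same simple pressure controller behaves sensibly against a responsive simulated patient, but runs away against a recorded, pressure-unresponsive waveform.

```python
def respond(measure_airflow, pressure=8.0, step=0.5, max_pressure=20.0):
    """Raise pressure until measured patient airflow recovers or a limit is hit."""
    while pressure < max_pressure and measure_airflow(pressure) < 0.8:
        pressure += step
    return pressure

# Closed loop: a responsive (simulated) patient whose airflow recovers
# at adequate pressure -- the controller settles at the right level.
print(respond(lambda p: 1.0 if p >= 10.0 else 0.2))  # -> 10.0

# Broken loop: a recorded, non-responsive breathing pattern never changes,
# so even a perfectly sound algorithm climbs to its ceiling.
print(respond(lambda p: 0.2))  # -> 20.0
```

Same controller both times; only the "patient" changed. A test that can't breathe back tells you nothing about the algorithm, only about its behavior against an impossible patient.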

Guest

Post by Guest » Sun Mar 06, 2005 10:07 am

I agree with SWS, or at least with what I perceive him to be saying...

I've never really liked simulations for studying effectiveness of something, because your test is limited both by the efficacy of your treatment AND the validity of your simulation.

The fact is that I'm just not convinced that a simulated breathing machine is going to provide exactly the breathing response I or any other human being will, and in building the simulation, we're basically saying "Hey, this is how we THINK it works, so let's test to see how well our machine reacts to what we think we'll see."

Liam, who breathes differently than anyone else. He breathes through his ears.

Liam1965
Posts: 1184
Joined: Fri Jan 28, 2005 2:23 pm
Location: New Hampshire

Post by Liam1965 » Sun Mar 06, 2005 10:10 am

Wow, I got "Guested". I'm usually pretty good at avoiding that.

That last guest was me, which I guess should be obvious, since I SIGNED the message.

Liam, Captain Obvious.


wading thru the muck!
Posts: 2799
Joined: Tue Oct 19, 2004 11:42 am

Post by wading thru the muck! » Sun Mar 06, 2005 1:36 pm

-SWS,

Either you and I are speaking in different languages and not understanding each other, or Liam has put some kind of a cappella hex on us. How do these companies develop these algorithms? They must have apparatus to recreate particular breathing pattern wave forms to test different strategies for preventing them. My guess is that for each individual there are a limited number of obstructive wave forms that occur. If they can be exactly replicated and fed to each machine to see how it reacts, this would be useful information. I understand that in a perfect world we would all just try each machine for six months and then see which one gave us the best results. Then we would hope that from the beginning to the end of this two-year test period our requirements had not changed and made all the data irrelevant. My guess is the NTSB would like to do automobile crash tests with real people instead of crash dummies, but they are lacking for volunteers.
Sincerely,
wading thru the muck of the sleep study/DME/Insurance money pit!

Liam1965
Posts: 1184
Joined: Fri Jan 28, 2005 2:23 pm
Location: New Hampshire

Post by Liam1965 » Sun Mar 06, 2005 2:15 pm

wading thru the muck! wrote:My guess is the NTSB would like to do automobile crash tests with real people instead of crash dummies but they are lacking for volunteers.
In my opinion, there is a difference. Crash test dummies can be made to perfectly mimic a human body, insofar as they're really just measuring devices. A few joints in the right places, proper heights and weights, and you're there.

That is (to me) quite a lot different from trying to build a machine which mimics the subtleties of breath response in order to test a CPAP machine.

This probably comes from being a programmer, but I don't like to write programs to test my programs, and I don't like to test my own programs. Not because the work is tedious, but because as the author of the code, I know how I expect it to be used, and subconsciously, I'm going to test it in the same fashion. I need someone who DOESN'T know how it works to test it and maybe try something that I would never have thought to try, and see how the code reacts.

I've found the more complex the piece of software I have to write in order to test my other piece of software, the more likely that any discrepancies I find turn out to be in the testing code rather than the original, tested code. I feel the same way here. AutoPAPs are very complex and subtle in the way they work. I'm frankly astounded when I think of what they can accomplish through the very minimal input of measuring your breath and maybe listening for your snores.

They are SO complex and subtle that it requires an equally complex test case, and thus the chances are you will build a test case which is very complex, but not realistic to the real world, and then build an APAP which responds "perfectly" to the test case, but turns out not really to be of any use at all on people.

I think such a test machine might be useful during initial development, but ultimately it requires human trials and human subjects to perfect it, to make sure that you've perfected your algorithm to the real world complex situation.

Does that make sense?

Liam, who's really just trying to get out of testing his own code.


-SWS
Posts: 5301
Joined: Tue Jan 11, 2005 7:06 pm

Post by -SWS » Sun Mar 06, 2005 3:20 pm

Wader, my guess is that they start off with pure simulation. They simulate both the machine (since that detection/response algorithm is of the essence)---and they simulate patient breathing patterns. Once the AutoPAP is a physical reality (prototype or otherwise) they would likely require a responsive breathing pattern (simulated or otherwise). Perhaps this example will clarify what I am trying to say:

Take the AutoPAP you currently use---the REMstar Auto. Recall exactly what it does to "non-responsive" apneas at over 8 cm. On a single obstructive sleep event (not a succession of several events) the REMstar Auto will attempt exactly three pressure increments before changing its pressure response altogether. At that point the REMstar Auto will actually back pressure down for fear of inducing central apneas. Even on that single obstructive event, the REMstar needed the two-way closed loop I referred to. The sequence on just one obstructive event went like this:


REMstar Auto Algorithmic technique for but a SINGLE sleep apnea event:
===================================================
1) apnea detected
2) increase pressure for the first time
3) measure patient airflow
4) if airflow reflects an "unresponsive" or uncorrected condition, increase pressure for the second time
5) measure patient airflow
6) if airflow reflects an "unresponsive" or uncorrected condition, increase pressure for the third time
7) measure patient airflow
8) if airflow still reflects an "unresponsive" or uncorrected condition, decrease pressure for fear of inducing central apneas

Those eight steps require a two-way patient-to-machine feedback loop. Those eight steps are for but one obstructive sleep event. A breathing machine or simulator that cannot provide a patient response would actually be useless on the REMstar Auto. If that test machine could not breathe back (show a responsive airflow pattern) in response to those pressure increment attempts, then all apnea breathing patterns artificially introduced into the REMstar Auto would be interpreted and treated as typical "unresponsive apneas" versus typical "obstructive apneas". That's a broken test BIG TIME!

Do you see what I am getting at? The breathing simulator with the broken patient feedback loop used in the medical study above doesn't measure diddly-squat for the REMstar Auto.
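Those eight steps could be sketched roughly as follows (a hypothetical reconstruction from the description above, with made-up thresholds -- not Respironics' actual code):

```python
def remstar_apnea_response(measure_airflow, pressure, step=0.5):
    """Up to three pressure increments, each followed by an airflow check;
    if the apnea still looks unresponsive after three tries, back the
    pressure down rather than risk inducing central apneas (step 8)."""
    for _ in range(3):                          # steps 2-7: three attempts
        pressure += step                        # increase pressure
        if measure_airflow(pressure) >= 0.8:    # steps 3/5/7: patient feedback
            return pressure                     # condition corrected
    return pressure - step                      # step 8: back pressure down

# Against a non-responsive test dummy, EVERY event looks like an
# "unresponsive apnea", so the machine backs off instead of treating it:
print(remstar_apnea_response(lambda p: 0.2, 9.0))  # -> 10.0
```

Run against a dummy that never breathes back, this logic always lands in the back-off branch, which is exactly why the study's non-responsive loop can't tell you anything about how the REMstar Auto treats a real obstruction.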

Liam presents an altogether different point that is extremely valid in my opinion.