Friday, February 29, 2008

Cluster trial: Effectiveness of the diabetes education and self management



Thank you, Bob, for highlighting the 'cluster trial'. It seems they are on the right track with the design of a community intervention trial, aren't they?

_____________________________________________
From:  
Effectiveness of the diabetes education and self management for ongoing and newly diagnosed (DESMOND) programme for people with newly diagnosed type 2 diabetes: cluster randomised controlled trial

M J Davies, S Heller, T C Skinner, M J Campbell, M E Carey, S Cradock, H M Dallosso, H Daly, Y Doherty, S Eaton, C Fox, L Oliver, K Rantell, G Rayman, K Khunti on behalf of the Diabetes Education and Self Management for Ongoing and Newly Diagnosed Collaborative

 << File: davies.pdf >>

Wednesday, February 27, 2008

Interesting article on behaviorial economics


What was I thinking? The latest reasoning about our irrational ways.
by
Elizabeth Kolbert
 
A couple of months ago, I went on-line to order a book. The book had a list price of twenty-four dollars; Amazon was offering it for eighteen. I clicked to add it to my "shopping cart" and a message popped up on the screen. "Wait!" it admonished me. "Add $7.00 to your order to qualify for FREE Super Saver Shipping!" I was ordering the book for work; still, I hesitated. I thought about whether there were other books that I might need, or want. I couldn't think of any, so I got up from my desk, went into the living room, and asked my nine-year-old twins. They wanted a Tintin book. Since they already own a large stack of Tintins, it was hard to find one that they didn't have. They scrolled through the possibilities. After much discussion, they picked a three-in-one volume containing two adventures they had previously read. I clicked it into the shopping cart and checked out. By the time I was done, I had saved The New Yorker $3.99 in shipping charges. Meanwhile, I had cost myself $12.91.


Why do people do things like this? From the perspective of neoclassical economics, self-punishing decisions are difficult to explain. Rational calculators are supposed to consider their options, then pick the one that maximizes the benefit to them. Yet actual economic life, as opposed to the theoretical version, is full of miscalculations, from the gallon jar of mayonnaise purchased at spectacular savings to the billions of dollars Americans will spend this year to service their credit-card debt. The real mystery, it could be argued, isn't why we make so many poor economic choices but why we persist in accepting economic theory.
In "Predictably Irrational: The Hidden Forces That Shape Our Decisions" (Harper; $25.95), Dan Ariely, a professor at M.I.T., offers a taxonomy of financial folly. His approach is empirical rather than historical or theoretical. In pursuit of his research, Ariely has served beer laced with vinegar, left plates full of dollar bills in dorm refrigerators, and asked undergraduates to fill out surveys while masturbating. He claims that his experiments, and others like them, reveal the underlying logic to our illogic. "Our irrational behaviors are neither random nor senselessthey are systematic," he writes. "We all make the same types of mistakes over and over." So attached are we to certain kinds of errors, he contends, that we are incapable even of recognizing them as errors. Offered FREE shipping, we take it, even when it costs us.


As an academic discipline, Ariely's field, behavioral economics, is roughly twenty-five years old. It emerged largely in response to work done in the nineteen-seventies by the Israeli-American psychologists Amos Tversky and Daniel Kahneman. (Ariely, too, grew up in Israel.) When they examined how people deal with uncertainty, Tversky and Kahneman found that there were consistent biases to the responses, and that these biases could be traced to mental shortcuts, or what they called "heuristics." Some of these heuristics were pretty obvious (people tend to make inferences from their own experiences, so if they've recently seen a traffic accident they will overestimate the danger of dying in a car crash), but others were more surprising, even downright wacky. For instance, Tversky and Kahneman asked subjects to estimate what proportion of African nations were members of the United Nations. They discovered that they could influence the subjects' responses by spinning a wheel of fortune in front of them to generate a random number: when a big number turned up, the estimates suddenly swelled.


Though Tversky and Kahneman's research had no direct bearing on economics, its implications for the field were disruptive. Can you really regard people as rational calculators if their decisions are influenced by random numbers? (In 2002, Kahneman was awarded a Nobel Prize for having "integrated insights from psychology into economics, thereby laying the foundation for a new field of research"; Tversky had died in 1996.)
Over the years, Tversky and Kahneman's initial discoveries have been confirmed and extended in dozens of experiments. In one example, Ariely and a colleague asked students at M.I.T.'s Sloan School of Management to write the last two digits of their Social Security number at the top of a piece of paper. They then told the students to record, on the same paper, whether they would be willing to pay that many dollars for a fancy bottle of wine, a not-so-fancy bottle of wine, a book, or a box of chocolates. Finally, the students were told to write down the maximum figure they would be willing to spend for each item. Once they had finished, Ariely asked them whether they thought that their Social Security numbers had had any influence on their bids. The students dismissed this idea, but when Ariely tabulated the results he found that they were kidding themselves. The students whose Social Security number ended with the lowest figures (00 to 19) were the lowest bidders. For all the items combined, they were willing to offer, on average, sixty-seven dollars. The students in the second-lowest group (20 to 39) were somewhat more free-spending, offering, on average, a hundred and two dollars. The pattern continued up to the highest group (80 to 99), whose members were willing to spend an average of a hundred and ninety-eight dollars, or three times as much as those in the lowest group, for the same items.
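The tabulation Ariely describes amounts to bucketing bidders by the last two digits of their Social Security numbers and comparing average bids across buckets. A minimal sketch of that grouping, using invented (ssn, bid) pairs rather than the experiment's actual data:

```python
# Hypothetical illustration of the anchoring tabulation described above.
# The (ssn_last_two, total_bid) pairs are invented, not data from the experiment.
from collections import defaultdict

bids = [(7, 55), (13, 80), (28, 95), (35, 110), (46, 130),
        (58, 150), (61, 160), (74, 170), (83, 190), (97, 205)]

buckets = defaultdict(list)
for ssn_last_two, total_bid in bids:
    buckets[ssn_last_two // 20].append(total_bid)

for bucket in sorted(buckets):
    lo, hi = bucket * 20, bucket * 20 + 19
    mean_bid = sum(buckets[bucket]) / len(buckets[bucket])
    print(f"SSN {lo:02d}-{hi:02d}: mean bid ${mean_bid:.0f}")
```

If anchoring is at work, the mean bid rises with the SSN bucket, which is exactly the pattern reported above.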
This effect is called "anchoring," and, as Ariely points out, it punches a pretty big hole in microeconomics. When you walk into Starbucks, the prices on the board are supposed to have been determined by the supply of, say, Double Chocolaty Frappuccinos, on the one hand, and the demand for them, on the other. But what if the numbers on the board are influencing your sense of what a Double Chocolaty Frappuccino is worth? In that case, price is not being determined by the interplay of supply and demand; price is, in a sense, determining itself.


Another challenge to standard economic thinking arises from what has become known as the "endowment effect." To probe this effect, Ariely, who earned one of his two Ph.D.s at Duke, exploited the school's passion for basketball. Blue Devils fans who had just won tickets to a big game through a lottery were asked the minimum amount that they would accept in exchange for them. Fans who had failed to win tickets through the same lottery were asked the maximum amount that they would be willing to offer for them.
"From a rational perspective, both the ticket holders and the non-ticket holders should have thought of the game in exactly the same way," Ariely observes. Thus, one might have expected that there would be opportunities for some of the lucky and some of the unlucky to strike deals. But whether or not a lottery entrant had been "endowed" with a ticket turned out to powerfully affect his or her sense of its value. One of the winners Ariely contacted, identified only as Joseph, said that he wouldn't sell his ticket for any price. "Everyone has a price," Ariely claims to have told him. O.K., Joseph responded, how about three grand? On average, the amount that winners were willing to accept for their tickets was twenty-four hundred dollars. On average, the amount that losers were willing to offer was only a hundred and seventy-five dollars. Out of a hundred fans, Ariely reports, not a single ticket holder would sell for a price that a non-ticket holder would pay.


Whatever else it accomplishes, "Predictably Irrational" demonstrates that behavioral economists are willing to experiment on just about anybody. One of the more compelling studies described in the book involved trick-or-treaters. A few Halloweens ago, Ariely laid in a supply of Hershey's Kisses and two kinds of Snickers: regular two-ounce bars and one-ounce miniatures. When the first children came to his door, he handed each of them three Kisses, then offered to make a deal. If they wanted to, the kids could trade one Kiss for a mini-Snickers or two Kisses for a full-sized bar. Almost all of them took the deal and, proving their skills as sugar maximizers, opted for the two-Kiss trade. At some point, Ariely shifted the terms: kids could now trade one of their three Kisses for the larger bar or get a mini-Snickers without giving up anything. In terms of sheer chocolatiness, the trade for the larger bar was still by far the better deal. But, faced with the prospect of getting a mini-Snickers for nothing, the trick-or-treaters could no longer reckon properly. Most of them refused the trade, even though it cost them candy. Ariely speculates that behind the kids' miscalculation was anxiety. As he puts it, "There's no visible possibility of loss when we choose a FREE! item (it's free)." Tellingly, when Ariely performed a similar experiment on adults, they made the same mistake. "If I were to distill one main lesson from the research described in this book, it is that we are all pawns in a game whose forces we largely fail to comprehend," he writes.
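For a rough sense of why the trade was still the better deal "in terms of sheer chocolatiness," here is a back-of-the-envelope comparison; the weight assumed for a Hershey's Kiss (about 0.16 oz) is an outside estimate, not a figure from the article:

```python
# Back-of-the-envelope chocolate accounting for the Halloween trades described above.
# The weight of a Hershey's Kiss (~0.16 oz) is an assumption, not from the article.
KISS_OZ = 0.16
FULL_BAR_OZ = 2.0
MINI_BAR_OZ = 1.0

start_kisses = 3

# Option A: give up one Kiss for the full-sized Snickers.
trade_total = (start_kisses - 1) * KISS_OZ + FULL_BAR_OZ

# Option B: keep all three Kisses and take the "free" mini-Snickers.
free_total = start_kisses * KISS_OZ + MINI_BAR_OZ

print(f"Trade a Kiss for the full bar: {trade_total:.2f} oz of chocolate")
print(f"Take the free mini instead:    {free_total:.2f} oz of chocolate")
```

Under these assumptions the trade yields roughly 2.3 oz of chocolate versus about 1.5 oz for the "free" option, yet most kids refused it.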


A few weeks ago, the Bureau of Economic Analysis released its figures for 2007. They showed that Americans had collectively amassed ten trillion one hundred and eighty-four billion dollars in disposable income and spent very nearly all of it: ten trillion one hundred and thirty-two billion dollars. This rate of spending was somewhat lower than the rate in 2006, when Americans spent all but thirty-nine billion dollars of their total disposable income.
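The implied saving is simply the gap between those two figures; a quick check with the rounded numbers quoted above:

```python
# Saving implied by the 2007 figures quoted above, in billions of dollars (rounded).
disposable_income = 10_184   # disposable income
spending = 10_132            # personal spending

saved = disposable_income - spending
savings_rate = saved / disposable_income
print(f"Saved roughly ${saved} billion, about {savings_rate:.1%} of disposable income")
# -> Saved roughly $52 billion, about 0.5% of disposable income
```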


According to standard economic theory, the U.S. savings rate also represents rational choice: Americans, having reviewed their options, have collectively resolved to spend virtually all the money that they have. According to behavioral economists, the low savings rate has a more immediate explanation: it proves, yet again, that people have trouble acting in their own best interests. It's worth noting that Americans, even as they continue to spend, say that they should be putting more money away; one study of participants in 401(k) plans found that more than two-thirds believed their savings rate to be "too low."
In the forthcoming "Nudge: Improving Decisions About Health, Wealth, and Happiness" (Yale; $25), Richard H. Thaler and Cass R. Sunstein follow behavioral economics out of the realm of experiment and into the realm of social policy. Thaler and Sunstein both teach at the University of Chicago, Thaler in the graduate school of business and Sunstein at the law school. They share with Ariely the belief that, faced with certain options, people will consistently make the wrong choice.Therefore, they argue, people should be offered options that work with, rather than against, their unreasoning tendencies. These foolish-proof choices they label "nudges." (A "nudge," they note with scholarly care, should not be confused with a "noodge.")


A typical "nudge" is a scheme that Thaler and Sunstein call "Save More Tomorrow." One of the reasons people have such a hard time putting money away, the authors say, is that they are loss-averse. They are pained by any reduction in their take-home payeven when it's going toward their own retirement. Under "Save More Tomorrow," employees commit to contributing a greater proportion of their paychecks to their retirement over time, but the increases are scheduled to coincide with their annual raises, so their paychecks never shrink. (The "Save More Tomorrow" scheme was developed by Thaler and the U.C.L.A. economist Shlomo Benartzi, back in 1996, and has already been implemented by several thousand retirement plans.)


People aren't just loss-averse; they are also effort-averse. They hate having to go to the benefits office, pick up a bunch of forms, fill them out, and bring them all the way back. As a consequence, many eligible employees fail to enroll in their companies' retirement plans, or delay doing so for years. (This is the case, research has shown, even at companies where no employee contribution is required.) Thaler and Sunstein propose putting this sort of inertia to use by inverting the choice that's presented. Instead of having to make the trip to the benefits office to opt in, employees should have to make that trip only if they want to opt out. The same basic argument holds whenever a so-called default option is provided. For instance, most states in the U.S. require that those who want to become organ donors register their consent; in this way, many potential donors are lost. An alternative (used, for example, in Austria) is to make consent the default option, and put the burden of registering on those who do not wish to be donors. (It has been estimated that if every state in the U.S. simply switched from an "explicit consent" to a "presumed consent" system several thousand lives would be saved each year.)


"Nudges" could also involve disclosure requirements. To discourage credit-card debt, for instance, Thaler and Sunstein recommend that cardholders receive annual statements detailing how much they have already squandered in late fees and interest. To encourage energy conservation, they propose that new cars come with stickers showing how many dollars' worth of gasoline they are likely to burn through in five years of driving.
Many of the suggestions in "Nudge" seem like good ideas, and even, as with "Save More Tomorrow," practical ones. The whole project, though, as Thaler and Sunstein acknowledge, raises some pretty awkward questions. If the "nudgee" can't be depended on to recognize his own best interests, why stop at a nudge? Why not offer a "push," or perhaps even a "shove"? And if people can't be trusted to make the right choices for themselves how can they possibly be trusted to make the right decisions for the rest of us?
Like neoclassical economics, much democratic theory rests on the assumption that people are rational. Here, too, empirical evidence suggests otherwise. Voters, it has been demonstrated, are influenced by factors ranging from how names are placed on a ballot to the jut of a politician's jaw. A 2004 study of New York City primary-election results put the advantage of being listed first on the ballot for a local office at more than three per cent, enough of a boost to turn many races. (For statewide office, the advantage was around two per cent.) A 2005 study, conducted by psychologists at Princeton, showed that it was possible to predict the results of congressional contests by using photographs. Researchers presented subjects with fleeting images of candidates' faces. Those candidates who, in the subjects' opinion, looked more "competent" won about seventy per cent of the time.


When it comes to public-policy decisions, people exhibit curious (but, once again, predictable) biases. They value a service (say, upgrading fire equipment) more when it is described in isolation than when it is presented as part of a larger good (say, improving disaster preparedness). They are keen on tax "bonuses" but dislike tax "penalties," even though the two are functionally equivalent. They are more inclined to favor a public policy when it is labelled the status quo. In assessing a policy's benefits, they tend to ignore whole orders of magnitude. In an experiment demonstrating this last effect, sometimes called "scope insensitivity," subjects were told that migrating birds were drowning in ponds of oil. They were then asked how much they would pay to prevent the deaths by erecting nets. To save two thousand birds, the subjects were willing to pay, on average, eighty dollars. To save twenty thousand birds, they were willing to pay only seventy-eight dollars, and to save two hundred thousand birds they were willing to pay eighty-eight dollars.
What is to be done with information like this? We can try to become more aware of the patterns governing our blunders, as "Predictably Irrational" urges. Or we can try to prod people toward more rational choices, as "Nudge" suggests. But if we really are wired to make certain kinds of mistakes, as Thaler and Sunstein and Ariely all argue, we will, it seems safe to predict, keep finding new ways to make them. (Ariely confesses that he recently bought a thirty-thousand-dollar car after reading an ad offering FREE oil changes for the next three years.)


If there is any consolation to take from behavioral economics (and this impulse itself probably counts as irrational), it is that irrationality is not always altogether a bad thing. What we most value in other people, after all, has little to do with the values of economics. (Who wants a friend or a lover who is too precise a calculator?) Some of the same experiments that demonstrate people's weak-mindedness also reveal, to use a quaint term, their humanity. One study that Ariely relates explored people's willingness to perform a task for different levels of compensation. Subjects were willing to help out (moving a couch, performing a tedious exercise on a computer) when they were offered a reasonable wage. When they were offered less, they were less likely to make an effort, but when they were asked to contribute their labor for nothing they started trying again. People, it turns out, want to be generous and they want to retain their dignity, even when it doesn't really make sense.

The Advantages of Closing a Few Doors

 
By JOHN TIERNEY
The next time you’re juggling options (which friend to see, which house to buy, which career to pursue), try asking yourself this question: What would Xiang Yu do?
Xiang Yu was a Chinese general in the third century B.C. who took his troops across the Yangtze River into enemy territory and performed an experiment in decision making. He crushed his troops’ cooking pots and burned their ships.
He explained this was to focus them on moving forward, a motivational speech that was not appreciated by many of the soldiers watching their retreat option go up in flames. But General Xiang Yu would be vindicated, both on the battlefield and in the annals of social science research.
He is one of the role models in Dan Ariely’s new book, “Predictably Irrational,” an entertaining look at human foibles like the penchant for keeping too many options open. General Xiang Yu was a rare exception to the norm, a warrior who conquered by being unpredictably rational.
Most people can’t make such a painful choice, not even the students at a bastion of rationality like the Massachusetts Institute of Technology, where Dr. Ariely is a professor of behavioral economics. In a series of experiments, hundreds of students could not bear to let their options vanish, even though it was obviously a dumb strategy (and they weren’t even asked to burn anything).
The experiments involved a game that eliminated the excuses we usually have for refusing to let go. In the real world, we can always tell ourselves that it's good to keep options open.
You don’t even know how a camera’s burst-mode flash works, but you persuade yourself to pay for the extra feature just in case. You no longer have anything in common with someone who keeps calling you, but you hate to just zap the relationship.
Your child is exhausted from after-school soccer, ballet and Chinese lessons, but you won’t let her drop the piano lessons. They could come in handy! And who knows? Maybe they will.
In the M.I.T. experiments, the students should have known better. They played a computer game that paid real cash to look for money behind three doors on the screen. (You can play it yourself, without pay, at tierneylab.blogs.nytimes.com.) After they opened a door by clicking on it, each subsequent click earned a little money, with the sum varying each time.
As each player went through the 100 allotted clicks, he could switch rooms to search for higher payoffs, but each switch used up a click to open the new door. The best strategy was to quickly check out the three rooms and settle in the one with the highest rewards.
Even after students got the hang of the game by practicing it, they were flummoxed when a new visual feature was introduced. If they stayed out of any room, its door would start shrinking and eventually disappear.
They should have ignored those disappearing doors, but the students couldn't. They wasted so many clicks rushing back to reopen doors that their earnings dropped 15 percent. Even when the penalties for switching grew stiffer (besides losing a click, the players had to pay a cash fee), the students kept losing money by frantically keeping all their doors open.
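A toy reconstruction of the door game can make the cost of "keeping options open" concrete. The payoff distributions, the click accounting, and the revisit-every-few-clicks behavior below are assumptions for illustration, not the parameters of the actual M.I.T. experiment; the point is only that a player who settles in the best room out-earns one who keeps rushing back to reopen doors.

```python
# Toy version of the three-door game described above. All numbers are assumptions.
import random

DOOR_MEANS = [3.0, 5.0, 8.0]   # assumed average payoff per click behind each door
TOTAL_CLICKS = 100

def payoff(door):
    """Random payoff for one earning click inside a door's room."""
    return max(0.0, random.gauss(DOOR_MEANS[door], 1.0))

def settle_strategy():
    """Sample each door briefly, then spend the remaining clicks in the best room."""
    clicks, earnings, samples = 0, 0.0, {}
    for door in range(3):
        clicks += 1                      # opening a door costs a click, earns nothing
        samples[door] = payoff(door)     # one sampling click inside the room
        earnings += samples[door]
        clicks += 1
    best = max(samples, key=samples.get)
    if best != 2:                        # the last room opened was door 2
        clicks += 1                      # switch back to the best room
    while clicks < TOTAL_CLICKS:
        earnings += payoff(best)
        clicks += 1
    return earnings

def door_chasing_strategy(revisit_every=5):
    """Keep rushing back to reopen the other doors so none of them 'vanish'."""
    clicks, earnings, current = 1, 0.0, 0   # one click spent opening the first door
    while clicks < TOTAL_CLICKS:
        if clicks % revisit_every == 0:
            current = (current + 1) % 3
            clicks += 1                     # switching burns a click with no payoff
            if clicks >= TOTAL_CLICKS:
                break
        earnings += payoff(current)
        clicks += 1
    return earnings

random.seed(0)
trials = 1000
settle = sum(settle_strategy() for _ in range(trials)) / trials
chase = sum(door_chasing_strategy() for _ in range(trials)) / trials
print(f"Settle in the best room: {settle:.0f} on average")
print(f"Keep all doors open:     {chase:.0f} on average")
```

Under these assumed payoffs the door-chaser earns markedly less, both because switching clicks pay nothing and because time is spent in lower-paying rooms, which mirrors the 15 percent drop reported in the experiment.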
Why were they so attached to those doors? The players, like the parents of that overscheduled piano student, would probably say they were just trying to keep future options open. But that’s not the real reason, according to Dr. Ariely and his collaborator in the experiments, Jiwoong Shin, an economist who is now at Yale.
They plumbed the players’ motivations by introducing yet another twist. This time, even if a door vanished from the screen, players could make it reappear whenever they wanted. But even when they knew it would not cost anything to make the door reappear, they still kept frantically trying to prevent doors from vanishing.
Apparently they did not care so much about maintaining flexibility in the future. What really motivated them was the desire to avoid the immediate pain of watching a door close.
“Closing a door on an option is experienced as a loss, and people are willing to pay a price to avoid the emotion of loss,” Dr. Ariely says. In the experiment, the price was easy to measure in lost cash. In life, the costs are less obvious: wasted time, missed opportunities. If you are afraid to drop any project at the office, you pay for it at home.
“We may work more hours at our jobs,” Dr. Ariely writes in his book, “without realizing that the childhood of our sons and daughters is slipping away. Sometimes these doors close too slowly for us to see them vanishing.”
Dr. Ariely, one of the most prolific authors in his field, does not pretend that he is above this problem himself. When he was trying to decide between job offers from M.I.T. and Stanford, he recalls, within a week or two it was clear that he and his family would be more or less equally happy in either place. But he dragged out the process for months because he became so obsessed with weighing the options.
“I’m just as workaholic and prone to errors as anyone else,” he says. “I have way too many projects, and it would probably be better for me and the academic community if I focused my efforts. But every time I have an idea or someone offers me a chance to collaborate, I hate to give it up.”
So what can be done? One answer, Dr. Ariely said, is to develop more social checks on overbooking. He points to marriage as an example: “In marriage, we create a situation where we promise ourselves not to keep options open. We close doors and announce to others we’ve closed doors.”
Or we can just try to do it on our own. Since conducting the door experiments, Dr. Ariely says, he has made a conscious effort to cancel projects and give away his ideas to colleagues. He urges the rest of us to resign from committees, prune holiday card lists, rethink hobbies and remember the lessons of door closers like Xiang Yu.
If the general’s tactics seem too crude, Dr. Ariely recommends another role model, Rhett Butler, for his supreme moment of unpredictable rationality at the end of his marriage. Scarlett, like the rest of us, can’t bear the pain of giving up an option, but Rhett recognizes the marriage’s futility and closes the door with astonishing elan. Frankly, he doesn’t give a damn.

Tuesday, February 26, 2008

Use of Alternative Thresholds Defining Insulin Resistance to Predict Incident Type 2 Diabetes Mellitus and Cardiovascular Disease -- Rutter et al. 117 (8): 1003 -- Circulation

Use of Alternative Thresholds Defining Insulin Resistance to Predict Incident Type 2 Diabetes Mellitus and Cardiovascular Disease
Source: Circulation

Background: The performance characteristics of surrogate insulin resistance (IR) measures, commonly defined as the top 25% of the measure's distribution, used to predict incident type 2 diabetes mellitus (DM) and cardiovascular disease (CVD) have not been critically assessed in community samples.

Methods and Results: Baseline IR was assessed among 2720 Framingham Offspring Study subjects by use of fasting insulin, the homeostasis model assessment of IR (HOMA-IR), and the reciprocal of the Gutt insulin sensitivity index, with 7- to 11-year follow-up for incident DM (130 cases) or CVD (235). Area under the receiver operating characteristic curve, sensitivity, specificity, and positive likelihood ratio were estimated at 12 diagnostic thresholds (quantiles) of IR measures. Positive likelihood ratios for DM or CVD increased in relation to IR quantiles; risk gradients were greater for DM than for CVD, with no 9th to 10th quantile (76th centile) threshold effects. IR had better discrimination for incident DM than for CVD (HOMA-IR area under the receiver operating characteristic curve: DM 0.80 versus CVD 0.63). The HOMA-IR 76th centile threshold was associated with these test-performance values: sensitivity (DM 68%, CVD 40%), specificity (DM 77%, CVD 76%), and positive likelihood ratio (DM 3.0, CVD 1.7). The HOMA-IR threshold that yielded >90% sensitivity was the 6th quantile for DM prediction and the 3rd quantile for CVD. Compared with the 76th centile threshold, these alternative thresholds yielded lower specificity (DM 43%, CVD 17%) and positive likelihood ratios (DM 1.6, CVD 1.1).

Conclusions: Surrogate IR measures have modest performance at the 76th centile, with no threshold effects. Different centile thresholds might be selected to optimize sensitivity versus specificity for DM versus CVD prediction if surrogate IR measures are used for risk prediction.
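The reported positive likelihood ratios follow directly from the sensitivity and specificity pairs (LR+ = sensitivity / (1 - specificity)); a quick check against the abstract's 76th-centile figures:

```python
# Positive likelihood ratio from the sensitivity/specificity pairs quoted in the abstract.
def positive_lr(sensitivity: float, specificity: float) -> float:
    return sensitivity / (1.0 - specificity)

for outcome, sens, spec in [("DM", 0.68, 0.77), ("CVD", 0.40, 0.76)]:
    print(f"{outcome}: LR+ = {positive_lr(sens, spec):.1f}")
# Prints LR+ of 3.0 for DM and 1.7 for CVD, matching the values reported above.
```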