Sunday, 29 January 2017

Review: The Social Animal by David Brooks

This is, in many ways, a strange book to read. Firstly, it sits in an uncomfortable space between fiction and non-fiction; the author aims to teach us about the subconscious workings of the brain and their importance for everyday life, but he does it in a highly unscientific way, telling the stories of two fictional people throughout their lives and throwing in research findings every so often as if a freeze-frame had been taken. Another aspect of the book that threw me off balance was its belief system; I found that I liked the trees (the individual research findings and titbits thrown in) but disliked the forest, which held sweeping generalizations and a religious ethos.


So I read the book enjoying all these little facts and quite liking the fictional characters. I identified quite strongly with Erica, a girl from an unstable background with fiery ambition and determination who grows up to become CEO of a company and deputy chief of staff in the White House. (Harold, her partner, grew up in a very privileged background and became an academic with little ambition; it was interesting to see a dynamic I'm very familiar with, that of people from less privileged backgrounds being highly driven, reflected in the book.) But there was a great deal of cognitive dissonance because of how present the author's religious, subconscious-favouring value system was. Now, I welcome books that give me a different viewpoint (even though it is of course more fun to confirm your biases), but here I felt the book took its viewpoint as gospel and argued from there, rather than starting from the facts and proving the thesis.


The weird style of this book makes it difficult to review. For a novel, I'd study it along the axes of character, plot, originality, etc., whereas with non-fiction I'd generally comment on interesting information and the quality of writing. This is a hybrid, so the review is going to have to be one too.

Characters

To treat this like a novel -- I liked Erica and found many of the side-characters interesting, if a bit (a lot) one-dimensional. But it was very clear that they were there to prove a point. Rather than being informative, the book often came across as being preachy.

Information

That said, it did have lots of information, which I enjoyed! 

In the first chapter, there were some interesting stats on mating competitiveness and on how one less desirable trait can be compensated for with a more desirable one, e.g. men who are 5 ft can compete equally with men who are 6 ft on dating sites when they make $175,000 a year more. (That is a shockingly large difference.)

Also interesting was the study of how reason and rationality are not the be-all and end-all. This was illustrated with the stories of men who had suffered frontal lobe damage and couldn't process emotion. One of them was asked to choose between two possible dates for his next appointment and spent half an hour listing the pros and cons of each, then was totally fine with whichever the doctor eventually picked for him. Without emotional input, making decisions is very difficult, because sometimes the options really are equivalent and we need emotion to push us towards one or the other.

In 1981, a man stuck his tongue out at a 42-minute-old infant and the infant stuck its tongue out back at him. This is incredible because the child couldn't possibly have had time to consciously learn what sticking your tongue out means; yet it somehow mapped the weird visual signal it was seeing onto the man's mouth movements, connected them to its own mouth (whatever a mouth even meant to it), and stuck its own tongue out too.

Infants expect a rolling ball to keep rolling and have a sense of mathematical proportion.

Something that alarmed me a bit is the effect early maternal treatment has on babies. There are securely attached and avoidantly attached children; securely attached children (66%) cry when their mother leaves them in a room and rush back to her when she returns, while avoidantly attached children (20%) don't react. Securely attached children cope better with stress -- when one gets an injection, it cries but its blood cortisol levels don't rise. Avoidantly attached children learn quickly that they can't rely on others and have to take care of themselves. They are independent but suffer from "chronic anxiety and are unsure in social situations". They can be great at logical discussion but uncomfortable with emotions, and they are three times more likely to be alone at seventy. In one study, 40% of people who had been abused as children went on to abuse their own children.

A study at the University of Kansas found that by the time they are four, children raised in poor families have heard 32 million fewer words than children raised in professional families. Students from the poorest quarter of the population have an 8.6% chance of earning a college degree, while those from the top quarter have a 75% chance.

Something interesting on the traits of highly driven people: "Ultra-driven people are often plagued by a deep sense of existential danger. Historians have long noticed that an astonishing percentage of the greatest writers, musicians, artists and leaders had a parent die or abandon them while they were between the ages of nine and fifteen. The list includes Washington, Jefferson, Hamilton, Lincoln, Hitler, Gandhi and Stalin, just to name a few."

I enjoyed the part of the story that dealt with political campaigning and Erica's role as deputy Chief of Staff, because I'm a politics junkie. The book had some interesting things to say about how people choose parties and candidates, and it certainly is not rational.

In short:

Long, wide-ranging book with interesting facts and decent characters but a dodgy message and lots of cognitive dissonance. Read it if you're looking for a hybrid.

Monday, 16 January 2017

BT Young Scientist Exhibition 2017: Autism, Antibiotics & Automatons

Hey guys! In what has become a 5-year tradition, I visited the BTYSTE on Saturday to meet up with old friends and be inspired, and here I am with the low-down on all the projects and stands that jumped out at me. 

(First, though, a shoutout to my friend Gráinne, who traipsed around the RDS with me for six hours.)

My Mentees

This year, like last year, I mentored Young Scientist students in my school. Niamh has been my mentee since the 2014 Young Scientist, and it's been amazing to see how much she's learned and grown. This year she competed in Senior Individual Biology, and she won her category! Her project was about the antimicrobial properties of tree bark, and she found some promising results. The other student from my school who did a project this year was Judith, who mathematically studied frieze patterns in the Book of Kells and the Lindisfarne Gospels. She won a display award and the Williams Lea Tag Special Award. As always, Ms. O'Regan deserves a shoutout for all her hard work in facilitating the projects.

BT Bootcamp

I met lots of people from BT Bootcamp -- one person who had a project, three people who had Entrepreneur/BT Alumni stands, and some more we bumped into around the hall. 

John's project was cool -- he fed a deep neural network a corpus of 20,000 items to teach it to distinguish between offensive and harmless statements. I saw something similar in the Google Science Fair a few years ago -- could be interesting to see if it's ever implemented on a large scale.
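To give a flavour of how that kind of system works, here's a minimal sketch of a text classifier. To be clear, this is not John's model -- he used a deep neural network, and the four-sentence training set here is a stand-in for his 20,000-item corpus -- but the overall shape (turn text into features, fit a classifier, predict on new sentences) is the same idea:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus; a real system needs thousands of examples.
texts = ["you are an idiot", "have a lovely day",
         "nobody likes you", "thanks for the help"]
labels = [1, 0, 1, 0]  # 1 = offensive, 0 = harmless

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

print(model.predict(["you are lovely", "you are an awful person"]))
```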

I had a nice chat with the guys from betterexaminations.ie, who came in to give us a talk at BT Bootcamp two years ago, and saw plenty of other familiar faces too. 

Top Wall

I joined the other 50,000 people contributing to the big winners' immense tiredness by checking out the projects that won the top prizes, and found a couple of interesting projects. 

I talked to Cormac Larkin (who won his category, the Intel award for best overall Physics/Chemistry/Maths project, and Individual Runner-Up) because we'd been talking on Twitter the night before. He found massive star candidates in the Magellanic Cloud using a method that had fallen out of use because it's useless for finding the parameters of stars -- which was fine by him, since he only wanted to identify the stars. He used data mining to cut out white dwarfs and found good candidates.

Shane Curran, the overall winner, had an interesting project using post-quantum cryptography, but unfortunately he wasn't around his stand when I checked so I didn't really get to understand it. Fun fact though: he came and spoke to us at Drogheda Young Innovators two years ago, after he'd won overall runner-up with Chemical.io. 

I was fond of a project called Micontact, which aimed to make learning eye contact more fun for autistic people. I was getting a bit annoyed when I heard mentions of Applied Behaviour Analysis and thought it might be about forcing autistic people to make eye contact, but then I learned that the girl who did it is actually autistic herself, and we had a really cool chat about how people think autistic people are utterly incapable and how we're motivated to prove that wrong. It was great to see some autistic self-representation on the winners' wall.

Finally, I talked to the guys who won the Analog Devices award for Best Technology Project (mainly because I'm friends with the brother of one of the students). They used Lego and 3D printing to make a set of legs that walk in a human-like way using antagonistic "muscles". 

Miscellaneous

I had an interesting chat with some guys who won first in their category for an epidemiological model of how colds spread in schools. I'd seen something similar with Claire Gregg's project on agent-based modelling of the spread of Ebola, which won her a spot as a regional finalist in the Google Science Fair and more. I like epidemiology and they were cool guys.
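For anyone wondering what "agent-based" means here, below is a minimal sketch of that style of model -- my own toy illustration, not the students' actual project. Every pupil is an individual agent; each day, each infectious pupil meets a few random others and may pass the cold on, and recovers after a fixed number of days:

```python
import random

N_PUPILS = 300
CONTACTS_PER_DAY = 8     # assumed daily mixing
P_TRANSMISSION = 0.05    # assumed chance of passing the cold per contact
DAYS_INFECTIOUS = 5

days_left = [0] * N_PUPILS      # days of infectiousness remaining per pupil
recovered = [False] * N_PUPILS
days_left[0] = DAYS_INFECTIOUS  # patient zero

for day in range(60):
    newly_infected = set()
    for pupil in range(N_PUPILS):
        if days_left[pupil] > 0:
            # An infectious pupil meets random others and may infect them.
            for _ in range(CONTACTS_PER_DAY):
                other = random.randrange(N_PUPILS)
                if (days_left[other] == 0 and not recovered[other]
                        and random.random() < P_TRANSMISSION):
                    newly_infected.add(other)
            days_left[pupil] -= 1
            if days_left[pupil] == 0:
                recovered[pupil] = True
    for pupil in newly_infected:
        if days_left[pupil] == 0 and not recovered[pupil]:
            days_left[pupil] = DAYS_INFECTIOUS
    print(day, sum(d > 0 for d in days_left), sum(recovered))
```

Run it a few times and you get the classic epidemic curve: a slow start, an explosive middle, and a burn-out once most pupils are immune.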

I was delighted to find a project on antibiotic resistance and the public's knowledge of it, since educating people on antibiotic resistance is a big thing for me.

It definitely felt like there was something different about BTYSTE this year -- it might've been having more years of experience and seeing what gets repeated (even aside from the old reliables of farming and social media), or being further away from my own time there. There was certainly a shift from university-based projects to more home-grown ones. This had upsides and downsides, but I do believe the best projects are on topics the researchers can fully manage: ones they can cover properly, design solid experiments for, and analyse statistically in a sound way.


So that's it over for another year. In the next few weeks, next year's winning projects will probably start being planned, and this incredible celebration of Irish teenage ingenuity will start all over again. The BTYSTE isn't perfect, but I think it's wonderful for two main reasons: (a) we see how motivated and talented Irish students are, discovering in their own free time, and (b) all the theatrics and the way Irish media descends on the Exhibition demonstrate that Ireland cares about science and the hard work of our teenagers.

Wednesday, 11 January 2017

Discussion: Superintelligence by Nick Bostrom

This post is framed as a discussion rather than a review because frankly, I don't feel qualified to review this book. It's a very academic book that was honestly barely within my reading capacity, so I don't think I can say whether it was good or bad because it was so far above everything I'd read previously on the topic.

Suffice to say that it has completely changed my attitudes to Artificial Intelligence, that it is a very comprehensive book and that I (along with Bill Gates, Elon Musk, Nils Nilsson, Martin Rees and other luminaries) recommend it for any intelligent person -- as long as you're willing to work at it, because this is not a light read. 

(I mean it -- the book is 260 pages of sentences like: "Anthropics, the study of how to make inferences from indexical information in the presence of observational selection effects, is another area where the choice of epistemic axioms could prove pivotal." It actually doesn't require any prior knowledge of computers or philosophy, the language is just consistently highbrow. It's certainly interesting, but it's interesting in the same way physics is interesting -- you have to work for it.)

In short, Superintelligence offers an aerial view of AI, starting from how we could get to superintelligence, taking us through its possible dangers and then elucidating some possible methods of avoiding having our entire universe turned to paperclips, our own bodies included.

Two posts on waitbutwhy.com are a large part of what got me into this topic, and if you don't want to slog through the book for a more thorough understanding, they offer a much more enjoyable and easy (but still worthwhile) view of the topic: Part 1, The Artificial Intelligence Revolution; Part 2, Our Immortality or Extinction.

So! Time for a brief discussion of the points I found most interesting.

Chapter 2: Paths to Superintelligence

Bostrom lays out the paths to five different kinds of superintelligence: AI (entirely software based), whole brain emulation (in which human brains are mapped and transferred to digital substrates, so they're still themselves but can think far faster and are less vulnerable to harm), biological cognition (humans still in human bodies but improved by better nutrition, education, gene editing and selective breeding), brain-computer interfaces (essentially what we have now with Google, but internal), and networks (collective superintelligence).

I found this interesting in that I hadn't really thought of collective superintelligence as a thing before. His discussion of the merits of each was interesting too (e.g. biological cognition is more familiar and less dangerous but far slower if it goes down the selective breeding path; AI is completely unfamiliar but gives us more control over the design).

Chapter 3: Forms of Superintelligence

In this chapter, Bostrom elaborates on three types of superintelligence: quality, speed and collective.

Quality superintelligence is what I always thought of as superintelligence -- a mind that is just better, the mind of a genius, that can make leaps no one else can no matter how hard they try, that can understand more. We have quality superintelligence compared to an ant; no matter how long you gave an ant to understand algebra, it wouldn't. We just think on a different plane.

Speed superintelligence exists, I would think, in computers now, which can perform many orders of magnitude more calculations per second than humans can. It means that a problem that might take a team of human workers 10 years to do could be done by a computer in 10 seconds, as long as it didn't require intelligence of a quality the computer doesn't possess. This is why we use calculators and supercomputers to do our maths.

Collective superintelligence is quite interesting -- it's the cumulative intelligence of a community if all worked together in perfect harmony and intelligences could be linearly added. This seems unlikely in humans but could definitely work in a network of computers.

Most interesting was Bostrom's discussion of where each of these superintelligences would come in useful. Collective superintelligence is most useful when a project can be broken down into many small, independent parts that can be done in parallel. Speed superintelligence is useful for a project that can be broken down into parts that can be done in series. And quality superintelligence is useful when you need leaps of logic or intuition or genius that nothing else can manage.
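Here's a toy way of seeing that distinction in code -- my framing, not Bostrom's. The first workload is "collective-friendly": thousands of independent subtasks that parallelise perfectly. The second is a chain where each step needs the previous step's output, so extra workers are useless and only a faster single worker helps:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def work(data: bytes) -> bytes:
    """One unit of 'thinking' -- a stand-in task."""
    return hashlib.sha256(data).digest()

if __name__ == "__main__":
    # "Collective": 10,000 independent subtasks; more workers = ~linear speedup.
    inputs = [str(i).encode() for i in range(10_000)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work, inputs))

    # "Speed": step n needs step n-1's output, so the work can't be shared.
    digest = b"seed"
    for _ in range(10_000):
        digest = work(digest)
```

Quality superintelligence has no analogue in this sketch, which is rather the point: it's the thing that neither more workers nor faster clocks can buy you.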

Chapter 6: Cognitive Superpowers

This chapter broke down a superintelligence's potential cognitive superpowers into six categories: (a) intelligence amplification (recursively improving its own intelligence), (b) strategizing to achieve distant goals and overcome an intelligent opposition, e.g. humans, (c) social manipulation (like convincing the researchers to let it connect to the internet), (d) hacking, (e) technology research, for space colonisation, military force, nanotech assembly to act as its arms... and (f) economic productivity, so it could earn money to buy e.g. hardware and influence if it didn't want to take them by force.

The chapter explains that a superintelligence with the intelligence amplification superpower could get all the other superpowers, and that in general if an AI has one of the six, the others will soon follow. Also, something interesting that appears throughout the book is the idea of an AI-complete problem (e.g. natural language understanding): a problem so hard that solving it amounts to solving the general AI problem itself.

Chapter 8: Is the default outcome doom?

This chapter was very interesting and scary. It lays out a case for why we can never be too careful with AI -- not only are we constantly thinking of more ways an AI could turn malicious against our wishes, but a superintelligent AI would by definition be capable of thinking of more ways than us. It could pull off a treacherous turn, seeming docile and friendly while "boxed" but turning harmful once let out.

A discussion of malignant failure modes followed:

1. Perverse instantiation: we tell the AI to do something, and it follows our command according to its interpretation rather than ours, e.g. we set its final goal as maximising human happiness, and it puts all of our brains in vats with electrodes stimulating our pleasure pathways.

2. Infrastructure profusion: we are not precise enough with the AI, and it destroys the universe while innocently trying to reach some other goal. E.g. we tell it to come up with a mathematical proof; it realises there's some probability that it got the proof wrong, so it spends eternity checking its answer over and over, turning the entire universe (our bodies included) into hardware to run more calculations on, killing us all. Lethal perfectionism, if you will. Bostrom laid out lots of ways infrastructure profusion could happen, and it's pretty scary how even with an innocent goal like increasing the number of paperclips in the world, the superintelligence could kill us all for the resources to reach that goal, converting "first the Earth and then increasing portions of the observable universe into paperclips". It really hammered in the point that a superintelligence could be highly rational and capable, but neither of those things requires that it have common sense.

3. Mind crime: do whole brain emulations count as people? If an evolutionary selection algorithm is employed to come up with an intelligent machine and all the poor performers are killed for being intelligent but not intelligent enough, is that murder? To study human psychology, an AI might create trillions of conscious simulations of human brains and experiment on causing them pain and pleasure and kill them afterwards, much like today's scientists do with lab rats. 

Chapter 9: The Control Problem

This chapter discusses various ways we might be able to control an AI's capabilities and motivations.

Capability could be controlled via:
(a) boxing - the system is blocked off from the external world, e.g. no internet connection and physical containment
(b) incentives - the system is incentivised by reward tokens or by social integration with other superintelligences
(c) stunting - the system is built with key handicaps so that it can't get too intelligent
(d) tripwires - diagnostic tests are run and the system is changed or shut down at the first sign of too much power or a dangerous intention (see the toy sketch below)
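Here's a toy sketch of the tripwire idea, purely my own illustration: run the system's improvement loop, but test a diagnostic after every step and halt the moment it crosses a threshold. (The hard part in reality is that "capability" isn't one observable number.)

```python
CAPABILITY_LIMIT = 100.0  # assumed safety threshold

def self_improvement_step(capability: float) -> float:
    return capability * 1.1  # stand-in for recursive self-improvement

capability = 1.0
for step in range(1000):
    capability = self_improvement_step(capability)
    if capability > CAPABILITY_LIMIT:  # the tripwire fires
        print(f"Tripwire at step {step}: capability {capability:.1f}, halting.")
        break
```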

Motivation (what the machine wants to do) could be controlled via:
(a) Direct specification - explicitly write out the rules or values we want the system to follow (hard, because human values are very difficult to state precisely)
(b) Domesticity - the system will only want to have a certain probability of being correct, or it will only want to control a small number of things, so that it doesn't convert the universe into paperclips
(c) Indirect normativity - essentially a way to push off specifying the motivation system
(d) Augmentation - start with a humanlike system and make it more intelligent

Chapter 10: Oracles, Genies, Sovereigns, Tools

This was really interesting. An oracle answers questions, a genie does what you tell it to for some specific defined goal, a sovereign does what it wants in the service of some broader goal you've set (like "cure cancer") and a tool is like today's software, like a flight control assistant. 

There are control issues with all of these -- an oracle seems like the safest since it just spits out an answer, but what if it converted the universe to servers to be sure it had the right answer? (This is something you could tackle with domesticity motivation, but it's never really safe.) A sovereign seems the most obviously dangerous, but the line between genie and sovereign is blurry. And the only reason today's tools aren't dangerous is that they aren't capable of posing an existential threat -- they mess up plenty; it's just usually not very consequential.

Chapter 11: Multipolar Scenarios

This chapter was simultaneously illuminating and so dark (ha). It talked about life for humans and human brain emulations in an AI world, comparing the fall in demand for human labour with the fall in demand for horses between 1900 and 1950. It talked about the Malthusian principle, in which a population grows until all members are eking out miserable subsistence lives on the currently available resources, then the pressure is released by mass death. That's not even the dark part; here, quoted, is the dark part. 

"Life for biological humans in a post-transition [AI transition] Malthusian state need not resemble any of the historical states of man (as hunter-gatherer, farmer or office worker). Instead, the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs, yet these might be generally unaffordable. Perhaps instead of using enhancement medicine, they would take drugs to stunt their growth and slow their metabolism in order to reduce their cost of living (fast-burners being unable to survive at the gradually declining subsistence income). As our numbers increase and our average income declines further, we might degenerate into whatever minimal structure still qualifies to receive a pension --perhaps minimally conscious brains in vats, oxygenized and nourished by machines, slowly saving up enough money to reproduce by having a robot technician develop a clone of them."

Now, is this speculative? Absolutely. But it's plausible if we're not careful with AI, and even if we are. 

I'm not even going to transcribe the next paragraph, which is headed "Voluntary slavery, casual death". 

Chapter 12: Acquiring values

This chapter discusses a number of ways of loading values into a system before it becomes superintelligent and slips out of our control:
(a) Explicit representation - implausible, because humanity can't even describe our full values in words, never mind in code
(b) Evolutionary selection - bad because it leads to mind crime; if candidate systems were evaluated by running them, bad systems could escape; and selection might produce something that fits our formal criterion of success but not what we actually meant
(c) Reinforcement learning - inadequate, because the system's final goal would be a reward, which once superintelligent it would simply obtain via wireheading, i.e. short-circuiting and stimulating its own reward centre
(d) Motivational scaffolding - give the AI some high-level motivation now that makes it want to work out a better one, still congruent with human values, once it's more intelligent
(e) Value learning - let it learn values gradually, like a child
(f) Emulation modulation - influence brain emulations using digital drugs
(g) Institution design - social control

Chapter 13: Choosing the criteria for choosing

This was interesting. So, not only do we not know how to install our values into an AI, we don't even know what our values are. So we need indirect normativity - letting the AI decide, but somehow arranging it so that it decides something in our interests. 

Eliezer Yudkowsky, AI researcher and author of my beloved Harry Potter and the Methods of Rationality, proposed Coherent Extrapolated Volition (CEV), phrased poetically as follows:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together, where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere, extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

Obviously, this is not something translatable into code. But it's a nice summary of the idea. The chapter also discussed morality models like moral realism (do the most moral thing -- but what if there is no objective moral truth?) and moral permissibility (don't do anything morally impermissible; out of the morally permissible things, do whatever most aligns with our CEV).

A Tidbit

I can't find the chapter this was from, but something that really wowed me was the ingenuity of even today's "dumb" computers. Evolutionary algorithms have come up with some pretty incredible solutions and shown the extent to which software can think outside the boxes we set.
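For anyone who hasn't met them: an evolutionary (or genetic) algorithm keeps a population of candidate solutions, scores each with a fitness function, and breeds the best via crossover and mutation. Here's a minimal sketch -- evolving a bitstring towards a target, the "hello world" of the field, nothing like the circuit-evolution experiments quoted below in sophistication:

```python
import random

TARGET = [1] * 32
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

def fitness(genome):
    # How many bits match the target?
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"Solved in generation {gen}")
        break
    parents = population[:POP_SIZE // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
```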


"A search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensible component, the capacitor. When the algorithm presented its successful solution, the researchers examined it and at first concluded that it “should not work.” Upon more careful examination, they discovered that the algorithm had, MacGyver-like, reconfigured its sensor-less motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory. The circuit amplified this signal to produce the desired oscillating output.
In other experiments, evolutionary algorithms designed circuits that sensed whether the motherboard was being monitored with an oscilloscope or whether a soldering iron was connected to the lab’s common power supply."

Crazy, right?! Anyway, that's the end of the discussion. It's definitely a mind-stretching book and a lot to take in, but I think it's pretty cool. Before I read it, I just thought AI was awesome and exciting -- now I see that it really needs more caution and am a lot less eager to rush it.

As always, if you have any thoughts on the book or the post, shove 'em down in the comments below! 

Wednesday, 4 January 2017

Review: Bad Pharma by Ben Goldacre

Hey guys! A few years ago, I read and loved Bad Science by Ben Goldacre -- it's actually partly what led to my love for experimental design and the scientific method. Bad Science featured debunkings of the MMR vaccine-autism scam, TV nutritionists, detoxes and homeopathy, and it was great fun. 

So when I saw Bad Pharma on the shelf in my local Waterstones, I knew immediately that I wanted it. Unfortunately, I didn't enjoy it quite as much.

Bad Science was a 5-star book, but Bad Pharma only 3.5. Here's why.

Bad Pharma discusses all the ways medicine is distorted by the pharmaceutical industry, regulators, governments and bad research practices.

Honestly, I think the title is largely a marketing ploy, because the book is half about pharma companies and half about other reasons medicine is broken. 

I wasn't particularly interested in some pharma witch-hunt; while I was unpleasantly surprised by some of the pharma shenanigans (shenanigans, I should say, which cost lives) with drugs like paroxetine (which can increase suicidal ideation in children, yet the drug company was cool with it being prescribed for children off-label when it only had a license for adults), my interest is really in trial design and science about science.

So I read the stories of bad behaviour by the pharmaceutical industry with some interest, and the stories of regulators' failures to regulate with exasperation, but I was most interested in the positive suggestions Goldacre had for better practice, and his discussions of trial design. 

I liked Goldacre's mix of reliance on systematic reviews and on his own experience as a doctor and popular science columnist, and I also liked how he didn't sensationalise things: he said early on (paraphrased) "the pharmaceutical industry is not hiding the cure for cancer - the real harm here is a lot more subtle but still kills people". So at least, apart from the title, he's not just selling books to conspiracy theorists using sensationalism.

The book got a bit boring as he comprehensively laid out all the ways in which doctors can be influenced by pharma companies' marketing departments, from ghostwriters of scientific papers to branded freebies to Continuing Medical Education seminars funded by pharma companies. That said, it was interesting to see that there is evidence that doctors who took money from drug companies or partook in this kind of thing were more likely to prescribe the company's drug and thus potentially harm patients by not giving them the best treatment. And I suppose the boringness and banality is an indication of just how pervasive the problem is and how hard it is to tackle -- it's just business as usual.

Anyway, my favourite parts were definitely the ones about trial design. I adored learning about forest plots. These have a dot for the main result of each trial in the analysis, with a horizontal line through it representing the error bar. Dots on the left show the new treatment is better, while dots on the right show the placebo or existing treatment is better, and if a trial's line touches the vertical central line (the line of no effect) then its result is not statistically significant. The marker at the bottom shows the pooled total. There's also a cumulative version of the plot, showing what it would look like if a meta-analysis were done after each new study, so the error bars get smaller and smaller and we see the progressing state of knowledge in the field.
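If you want to see how one is built, here's a small sketch with made-up numbers (my own illustration, not one of the book's figures). It pools the trials with standard inverse-variance weighting and draws the dots, the error bars and the line of no effect:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented effect sizes: negative favours the new treatment here.
effects = np.array([-0.4, -0.1, -0.6, 0.1, -0.3])  # per-trial estimates
ses = np.array([0.25, 0.30, 0.35, 0.20, 0.15])     # standard errors

# Fixed-effect meta-analysis: weight each trial by 1/variance.
weights = 1 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

ys = np.arange(len(effects), 0, -1)  # one row per trial, pooled row at 0
plt.errorbar(effects, ys, xerr=1.96 * ses, fmt="o", capsize=3)
plt.errorbar([pooled], [0], xerr=[1.96 * pooled_se], fmt="D", capsize=3)
plt.axvline(0, linestyle="--")  # CIs crossing this line aren't significant
plt.yticks(list(ys) + [0],
           [f"Trial {i + 1}" for i in range(len(effects))] + ["Pooled"])
plt.xlabel("Effect size (negative favours the new treatment)")
plt.title("Toy forest plot")
plt.show()
```

The pooled marker ends up with a tighter interval than any single trial, which is the whole point of meta-analysis.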

I liked the Bad Trials section, which talked about problems in trials, from straight-up fraud to subtler tricks: stopping a trial early; only testing "ideal" patients (ignoring the actual patients who'll be taking the drug but wouldn't qualify for a trial); comparing your new treatment to a placebo instead of the best currently available treatment (it doesn't mean much to say your treatment is better than nothing when the patient is usually choosing between drugs, not between your treatment and nothing); measuring surrogate outcomes (like cholesterol) instead of real-world outcomes (like heart attacks); or saying you're going to measure one thing and then measuring something else because that makes the results look better.

I also liked the Bigger, Simpler Trials section, which describes something Goldacre himself is involved in: when a patient has a disease with multiple treatments and no one knows for sure which is best, the GP can press a button to have the prescription randomised, and the information on how the patient does is fed back into the system -- essentially a huge, automatic trial, at no extra cost or danger to health. This works in the UK because of their huge patient database and centralised medical system. I think it's a cool idea.
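In code, the core of the idea is almost embarrassingly small -- which is rather the point. This is my own toy sketch, not the actual system:

```python
import random

def prescribe(patient_id, options, registry):
    """Randomise between treatments in genuine equipoise and log the choice."""
    treatment = random.choice(options)  # the "button press"
    registry.append({"patient": patient_id,
                     "treatment": treatment,
                     "outcome": None})  # filled in at follow-up
    return treatment

registry = []
chosen = prescribe("patient-001", ["treatment A", "treatment B"], registry)
print(f"Prescribed {chosen}; outcome to be recorded at follow-up.")
```

All the difficulty lives outside the code: consent, genuine clinical equipoise, and a records system that fills in that `outcome` field reliably.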

His section on Missing Data was cool too, on how negative results usually aren't published. This was especially interesting because it corrected a misconception of mine -- I assumed this was because journals weren't accepting these papers, but Goldacre showed that, on the whole, they were. The big problem with negative results not being published is that it pollutes the field; even with a systematic review/Cochrane review or forest plot, the benefits of the treatment will be overstated, putting patients at risk. I liked the idea of preregistering trials and was dismayed to hear about failed attempts at this, like when journal editors said they'd only publish preregistered trials and then didn't stick to it, and how the European medicines regulator kept a register for transparency and then refused to publish any of it (at least until 2012).

Something a bit annoying was how the cover said "...and how we can fix it", and Goldacre kept implying that you/we would be able to do something about this, but then most of the suggestions at the end of the chapter were only relevant to doctors or regulators or drug reps or medical students. Patients and citizens mostly only had the option to lobby. Now, that's fine! That's realistic, and I have no problem with it -- but it's a little disingenuous of the marketing department to pitch it as something laypeople have total control over, and then have the inside of the book say differently. It's probably not something Goldacre had control over -- this marketing problem annoys me with lots of books. 

The main reason I'm knocking off 1.5 stars is that I didn't like how hard I was being pushed and expected to feel angry -- even the front cover says "This is a book to make you enraged" - Daily Telegraph. Yes, absolutely, there is dirty work going on -- but the book was very obviously and consciously manufacturing outrage, and whether or not outrage is merited, I would have preferred the facts and analysis presented to me without the "this is how you should feel after reading".

In short: I loved the parts about trial design and was somewhat alarmed by the many failings of the industry -- I just could've done without the witch-hunt vibe.