Saturday, 20 February 2016

Some Thoughts about the Future of Scientific Research

I've been thinking about what scientific research is going to look like in the next few decades, or more accurately what I think it should ideally look like. So here are some of those thoughts off the top of my head. Themes: automation, human error, AI, ethics.

1. I think the scientific endeavour is being seriously impeded by human error. Humans are very far from infallible: not only can we make physical mistakes, like filling a flask past the mark due to momentarily shaky hands, we can also arrive tired or emotional from outside circumstances and so misinterpret a protocol, or fail to pay attention to some factor we should have. Or we might be lazy and not do the absolute best thing every time, e.g. making up a fresh solution of some reagent. These are all normal parts of being human, but they introduce flaws into scientific knowledge, and I don't subscribe to the idea that "the beauty of science is in its flaws".

2. Lab work is often pretty damn boring. Obviously it depends on exactly what you're doing, but there's a lot of standing in sterile white environments pouring things and weighing things and peering into microscopes, then sitting in front of a computer while it runs calculations, not to mention all the waiting. The exciting bit (at least for me) is the design and interpretation of the experiments, not actually carrying them out (YMMV). It's really not where the wonder of discovery lies, so why have humans do it?

3. A little about the logistics of having robots carry out our science experiments. Since this is the future we're talking about, we could use something like 3D printing to digitally design robots to our exact specifications for unique experiments, while for more common procedures we could mass-produce generic robots with the hardware to perform, say, Western blots or serial dilutions. This would include basic things like adjustable grip (so the robots wouldn't crush delicate beakers, aliquots, or whatever), much improved visual processing, and cameras recording everything they do, for accountability. (Recording would be good for human researchers too in the interim, but they probably wouldn't consent, and you don't have to get consent from a robot.) I'm sure you could extend this further and give the robots internal electronic balances etc., but I'll leave that for another time.
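To make "exact specifications" a little more concrete, here's a rough sketch (in Python) of what a machine-readable robot spec might look like. All the fields, names and limits here are invented purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class RobotSpec:
        max_grip_force_n: float      # capped so delicate glassware survives
        camera_fps: int              # continuous recording, for accountability
        has_internal_balance: bool   # weigh in place rather than moving samples
        supported_procedures: tuple  # what this model's hardware can perform

    # A hypothetical mass-produced model for routine benchwork.
    bench_bot = RobotSpec(
        max_grip_force_n=5.0,
        camera_fps=30,
        has_internal_balance=True,
        supported_procedures=("western_blot", "serial_dilution"),
    )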

The professor, or whoever is running the experiment via robot, would program the robot with exactly what it needs to do. This would have an added benefit: the experimenter would be forced to understand their experiment well enough to phrase the instructions totally unambiguously, which would make it easier to eventually write up a paper. I'm sure programming's explosion in popularity would be sustained this way, as robot-human liaison roles became useful.
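As a toy example of what "totally unambiguous" might mean in practice, here's a sketch of a serial dilution spelled out step by step. The Step format and action names are made up; the point is just that every volume, tube and order of operations is explicit, with nothing left to the robot's judgement:

    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str        # e.g. "dispense", "transfer", "mix"
        volume_ul: float   # explicit volumes - no "about half a flask"
        source: str
        destination: str

    def serial_dilution(stock, diluent, tubes):
        """A 1:10 serial dilution across the given tubes, fully spelled out."""
        steps, previous = [], stock
        for tube in tubes:
            steps.append(Step("dispense", 900.0, diluent, tube))   # diluent first
            steps.append(Step("transfer", 100.0, previous, tube))  # 1:10 carry-over
            steps.append(Step("mix", 500.0, tube, tube))           # mix by pipetting
            previous = tube
        return steps

    for step in serial_dilution("stock_A", "buffer", ["t1", "t2", "t3", "t4"]):
        print(step)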

4. Up until this point, I've been talking about the menial parts of research, with humans remaining useful in the creative parts of science, like designing and interpreting experiments. But as artificial intelligence develops, much of that too could be done by machines. After all, AIs and software are analytical by nature; all they need is a detailed, accurate framework telling them what to do. So if we programmed them with a good framework for experimental design, they could be fed the details of each specific piece of research (hypothesis, variable(s) to be tested) and come up with the controls and logistics.
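In its simplest form, such a framework might boil down to something like the sketch below: enumerate the conditions, add a control, replicate, and randomise. (The variables and levels are invented, and real design software would obviously need far more domain knowledge than this.)

    import itertools
    import random

    def design_experiment(variables, baselines, replicates=3, seed=42):
        """Cross every tested level of every variable, add a control with
        everything held at baseline, replicate, and randomise the run order."""
        names = list(variables)
        conditions = [dict(zip(names, combo))
                      for combo in itertools.product(*variables.values())]
        conditions.append(dict(baselines))  # the control condition
        runs = [dict(c, replicate=r) for c in conditions for r in range(replicates)]
        random.Random(seed).shuffle(runs)   # guard against time-of-day drift
        return runs

    plan = design_experiment(
        variables={"temperature_C": [30, 42], "pH": [6.5, 8.0]},
        baselines={"temperature_C": 37, "pH": 7.4},
    )
    for run in plan:
        print(run)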

Of course, at least for a while there will still be things only humans can do: problems so unique that they fall outside the limits of the framework. But humans tend to be working from (somewhat less rigid) frameworks too, aka models of the world, based on what they've understood from their education. So if an AI can't do something science-related, chances are a human can't either, and AIs don't get tired or sentimentally attached to their ideas.

Designing and interpreting experiments is exactly what I want to do, so obviously this isn't a fun conclusion to think about. But it seems undeniably plausible.

Objections:

1. Loss of jobs - this would make a huge number of people not in professorial roles redundant, or almost so. Lab technicians (and technicians in other fields) would hate it. Some jobs would be created designing the hardware, programming the software, and teaching people to communicate in a way software can understand, but as far as I can see there'd still be a net loss.

2. Serendipitous discoveries - some discoveries come from mistakes, like Fleming's accidental discovery of penicillin. The concern is that something unpredicted could happen during an experiment that the robot hasn't been programmed to deal with, and suddenly, bam, crash: the robot freaks out and destroys the lab, doesn't record the event because that's not in its brief, and something valuable is lost to science forever. To avoid this, the robot could be programmed to monitor its environment constantly (on top of the camera footage recorded for later viewing) and to stop and alert the person running the experiment if it spots an anomaly. This could get annoying with false alarms, but presumably after a while the researcher would figure out the problem spots and either fix them or be on hand to respond to the robot's alerts. This is another place where the researcher would have to be very well informed about their experiment ahead of time, to compensate for software's difficulty in dealing with ambiguity.
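Here's a sketch of that monitor-and-alert loop, with a simulated sensor standing in for real instruments (the variables, expected ranges and readings are all made up):

    import random

    EXPECTED = {"temperature_C": (20.0, 25.0), "turbidity": (0.0, 0.4)}

    def read_sensors():
        # Stand-in for real instrument readings.
        return {"temperature_C": random.gauss(22.0, 1.5),
                "turbidity": random.random() * 0.6}

    def run_with_monitoring(max_steps=50):
        for step in range(max_steps):
            readings = read_sensors()
            anomalies = {name: value for name, value in readings.items()
                         if not EXPECTED[name][0] <= value <= EXPECTED[name][1]}
            if anomalies:
                # Pause and alert rather than ploughing on (or freaking out).
                print(f"step {step}: pausing; alerting researcher: {anomalies}")
                return anomalies
            # ...otherwise carry on with the scripted protocol step here...
        print("run completed, no anomalies")
        return {}

    run_with_monitoring()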

3. Need for coding skills - one skillset (carrying out the manual parts of experiments) would suddenly become less useful than another (programming), so a lot of people would have to either reskill or become unemployed. Then again, with the current worldwide push to teach programming, there shouldn't be a shortage of programmers.

4. Can we trust the robots? With things like this, there's always the fear of a robot uprising. What if the AI becomes too intelligent to control and starts ordering us around? What if the robots become sentient and stage an uprising against being treated as slaves, taking umbrage at my point above that you don't need to get consent from robots? Does science lose some nobility if humans aren't physically putting their blood, sweat and tears into it? How much of the work does the researcher own, and how much is owned by, say, the manufacturers and designers of the robot? To those last questions, I'd say the researcher still owns the work; there's simply less work to own. But will there come a point where robots can own things, and will that include intellectual property?
