Is AI going to kill us or bore us to death?

The interwebs are split over whether AI is going to evolve into brutal killing machines or simply remain just another tool we use. This isn’t a debate among average Joes like you and me; it’s being had by some pretty big intellectuals. Elon Musk thinks that dealing with AI is like summoning demons, while techno-optimist Kevin Kelly thinks that AI is only ever going to be a tool and never anything more than that. Finally, you have Erik Brynjolfsson, an MIT professor, who believes that AI will supplant humanity in many activities but that the best results will come from a hybrid approach (an argument Kevin Kelly also uses at length in his article).

Personally, I think a lot of Kevin Kelly’s position is extremely naive. Believing that AI will ONLY ever be something boring, never something that can put us at risk, is frankly short-sighted. Consider that Samsung, yes, the company that makes your cell phone, developed a machine-gun sentry that could tell the difference between a man and a tree back in 2006. In the intervening 8 years, it’s likely that Samsung has continued to advance this capability. It’s in their national interest, as these sentries are deployed at the demilitarized zone between North and South Korea. Furthermore, with drones it’s only a matter of time before we deploy an AI that makes many of the decisions between bombing and not bombing a given target. Currently we have a heuristic; there’s no reason why that couldn’t be developed into a learning heuristic for a piece of software. This software wouldn’t even have to be in the driver’s seat at first. It could provide recommendations to the drone pilot and learn from the pilot’s choices, noting when it is overridden and when it is not. In fact, the pilot wouldn’t even have to know what the AI is recommending, and the AI could still learn from the pilot’s choices.
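To make the "learning silently from the pilot" idea concrete, here is a minimal sketch. Everything in it is invented for illustration (the feature names, the simulated pilot policy, the class names); it just shows how an advisory model could train itself on a human's strike/no-strike decisions without ever acting:

```python
# Hypothetical sketch: an advisory model that learns from a drone pilot's
# strike/no-strike decisions. All names and data are invented. The model
# never acts; it only observes the human's choices and updates itself.

from dataclasses import dataclass
import math
import random

@dataclass
class Situation:
    target_confidence: float   # 0..1, how sure the sensors are of the target
    civilian_proximity: float  # 0..1, how close civilians appear to be

class AdvisoryModel:
    """Tiny logistic model trained online, one pilot decision at a time."""

    def __init__(self, lr: float = 0.5) -> None:
        self.w = [0.0, 0.0]  # weights for the two features
        self.b = 0.0         # bias term
        self.lr = lr         # learning rate

    def predict(self, s: Situation) -> float:
        # Probability the model would recommend "strike".
        z = self.w[0] * s.target_confidence + self.w[1] * s.civilian_proximity + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def observe(self, s: Situation, pilot_struck: bool) -> None:
        # One stochastic-gradient-descent step on log loss:
        # nudge the model toward whatever the pilot actually did.
        err = self.predict(s) - (1.0 if pilot_struck else 0.0)
        self.w[0] -= self.lr * err * s.target_confidence
        self.w[1] -= self.lr * err * s.civilian_proximity
        self.b -= self.lr * err

random.seed(0)
model = AdvisoryModel()
# Simulated pilot policy: strike only when confident and civilians are far away.
for _ in range(2000):
    s = Situation(random.random(), random.random())
    model.observe(s, pilot_struck=s.target_confidence > 0.7 and s.civilian_proximity < 0.3)

clear = Situation(target_confidence=0.9, civilian_proximity=0.1)
risky = Situation(target_confidence=0.4, civilian_proximity=0.8)
print(model.predict(clear), model.predict(risky))
```

After a few thousand observed decisions, the model recommends strikes in roughly the same situations the simulated pilot does, despite never being told the pilot's rule, which is exactly the quiet shadow-learning described above.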

AI isn’t going to be some isolated tool; it’s going to be developed in many different circumstances concurrently, by many organizations with many different goals. Sure, Google’s goal might be better search, but they also acquired Boston Dynamics, which has done some interesting work in robotics, and they are working on driverless cars, which will need an AI. Who’s to say the driverless-car AI couldn’t be co-opted by the government and combined with the drone pilot’s AI to drop bombs or to “suicide” whenever it reaches a specific location? These AIs could be completely isolated from each other and still have the capability to be totally devastating. What happens when they are combined? That could happen at some point through a programmer’s decision or through an intentional attack on Google’s systems. These are the risks of fully autonomous units.

We don’t fully understand how AI will evolve as it learns more. Machine learning is a bit of a Pandora’s box. As with almost any new technology, there will likely be many unintended consequences. However, the ramifications could be significantly worse, because the AI could have control over many different systems.

It’s likely that both Kevin Kelly and Elon Musk are wrong. However, we should act as if Musk is right and Kelly is wrong. Not because I want Kelly to be wrong and Musk to be right, but because we don’t understand complex systems very well. They very quickly get beyond our capability to understand what’s going on. Think of the stock market. We don’t really know how it will respond to a given company’s quarterly earnings, or even to results across a sector. There have been flash crashes, and there will continue to be, as we do not have a strong set of controls over high-frequency traders. If this is extended to a system that has the capability to kill or to intentionally damage our economy, we simply couldn’t manage it before it caused catastrophic damage. Therefore, we must intentionally design in fail-safes and other control mechanisms to ensure these things do not happen.

We must assume the worst, but rather than hope for the best, we should develop a set of design rules for AI that all programmers must adhere to, to ensure we do not summon those demons.

Is Scientism the problem?

I just finished reading an article in The New Republic which argues that history and the humanities are knowledge too. At times it felt like the author was yelling at his brother, begging to be noticed. Personally, I feel that in general the author is correct: history and the humanities do play an important role and can be considered knowledge. However, the author makes one glaring mistake: he equates physics’ unified “theory of everything” with a theory of literally everything. In physics it typically means a combination of all physical laws, both particle and cosmic, which would then extend into chemistry and likely into biology. However, this type of theory of everything would stop there. It couldn’t really subsume natural selection, as describing the functions of chemicals in a specific manner does not necessarily yield a truer understanding of evolution. It would be able to explain how phenotypes change with genotypes, but not why one genotype/phenotype pair was selected over another, without an understanding of the specific environments at the time. A true theory of everything at that level would essentially be a simulation of the universe. It would be impossible to model in a series of equations beyond the fundamental laws of physics.

To understand the evolution of biological systems, you have to understand the natural history of the world in which the organisms develop and evolve. This is why, when you read Sagan, Dawkins, or any other biologist or cosmologist, they argue that if you rewound the tape of history you’d get a different present day. Some things might have happened just differently enough that you’d have no humans. Understanding the history of our world allows us to understand where its future is going.

In the same way, history does matter. There are branches of economics, such as evolutionary economics, that use complexity models and work to ensure that the history of events is included in their models. The major difference between typical theories of history and psychology, on the one hand, and newer models in economics and complex-systems physics, on the other, is that we’re able to test the latter using simulations. It is likely that in the future we’ll be able to do the same thing with history. This will give us a deeper understanding of why our societies have developed as they have. One heavily contested aspect of evolution, mentioned in the article, is cultural inheritance, which is where the theory of memes came from. This approach doesn’t suggest that one type of people or one lifestyle is better than another; it simply says that, in the environment a culture resides in, it is more capable of surviving than others. This can go down deeper to smaller niches within the culture and how well they adapt to their environments.

Another aspect the author discusses is the difference in the acceptability (or perhaps the perception) of radical paradigm shifts in science compared to the humanities and history. He mentions specifically Freud in psychology and Galileo in physics. He argues that Galileo was able to make changes in physics because he tackled an “easy” problem with a minimal level of complexity: he went after the theory of gravity and how objects fall at the same rate, while Freud went after the entirety of the human psyche. I agree there is a difference in complexity; however, the key difference between Galileo and Freud is that Galileo was better able to explain the state of the world, and when new scientific theories were produced they continued to explain what Galileo found, but with more accuracy, and expanded on it. When Freud was discredited, it was more like discrediting alchemy than like going from Newtonian physics to relativistic physics.

The key difference between many theories in the humanities and in the rest of science is the lack of continuity between two major theories. Yes, relativistic physics completely obliterated the Newtonian world (universe) view, but it solved the same problems, or showed that many of the old problems were only problems because the theory wasn’t complete enough.

The key thing to remember, in either science or the humanities, is that all models are wrong, but some are useful. Freud was wrong in how he looked at the human psyche, but his models allowed other theories to be tested and used, and likely helped spawn neuroscience and the bridging between neuroscience and many psychological problems.

The difficulty of science

As reported in Science Insider yesterday, the faster-than-light neutrinos may apparently have been caused by a loose fiber-optic cable. To me this raises a further question: were other results impacted by this loose fiber-optic cable?

This is where the difficulty in science lies. First, CERN had to admit that there was a faulty detector that could have caused the result, invalidating what was likely the greatest recent finding in physics. Second, they are going to have to run the same tests again to confirm that the original results were bad. Finally, a bunch of other locations have invested in their own capabilities and will be able to test the results for themselves too. I think the last two are important. At one point Fermilab indicated that they had seen faster-than-light neutrinos, but it was beyond their capabilities to reach the required level of statistical significance.

I think this shows an important factor within science. First, scientists have the ability to referee themselves on earth-shattering (or rather, light-speed-shattering) results. It indicates that the system works. Second, it shows there is integrity among scientists, as a result like this would essentially make careers and set this group up for the rest of their lives. By admitting what caused the error and working to correct it in testing, they showed they care more about the results than about their careers. Although, to be fair, lying about this after finding it would have ruined their careers just as quickly.

Why is that important, though? Let’s take a step back and look at the bigger picture. Most scientists are trained in a very similar fashion. You are taught the basics during high school, move on to more advanced topics in college, and finally many become experts by pursuing a PhD. All are taught the falsifiability of hypotheses and theories as the cornerstone of scientific progress. Of course there are debates about whether this is how things actually work in science, but typically it is. There are points where a major shift in scientific discourse occurs, but this can take a long time, and the new perspective must answer the questions of the previous scientific perspective as well as questions that perspective could not. A perfect example of this is Newtonian physics versus relativistic physics. Newtonian physics gives you Force = Mass x Acceleration; it’s not fully accurate, but it works well enough for daily activity. Under certain circumstances it’s simply wrong. That’s where Einstein came in and fixed it. It took a while for this theory to be accepted, but it’s now the prevailing one.
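The "works well enough for daily activity" point can be made concrete with a little arithmetic. The relativistic correction to Newtonian mechanics is governed by the Lorentz factor, gamma = 1/sqrt(1 - v²/c²), which is indistinguishable from 1 at everyday speeds (the example speeds below are just illustrative round numbers):

```python
# How far off is Newton? The Lorentz factor gamma = 1/sqrt(1 - v^2/c^2)
# measures the relativistic correction; gamma - 1 is the fractional error
# of treating the world Newtonianly.

import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

speeds = [
    ("highway car (~108 km/h)", 30.0),
    ("low-Earth-orbit satellite", 7_800.0),
    ("0.9c, where Newton fails", 0.9 * C),
]

for label, v in speeds:
    print(f"{label}: gamma - 1 = {gamma(v) - 1:.3e}")
```

For a car the correction is around 10^-15, utterly unmeasurable in daily life, while at 0.9c the correction is larger than the Newtonian answer itself. Newton isn't so much wrong as incomplete, which is exactly the continuity between theories described here.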

From a scientist’s point of view, the incentives are oriented towards yearly output of papers accepted into high-quality peer-reviewed journals, such as Science and Nature and whatever is best in their field. There are no incentives for fabricating hoax theories; they would lose funding and eventually be jobless.

I think this error at CERN can inform the discourse over topics such as evolution and climate change. It’s indicative of the ethics that prevail in science today: when theories are wrong, work is done to find out why and how. Once that has been answered, new theories are suggested and eventually accepted. Understanding how this works should make topics like climate change and evolution less threatening.

Frivolous Science? Pfft

Today I saw this post on Reddit. Long story short, this guy was asking the r/askscience subreddit why we do research like the CERN experiments, since it has no practical use. There are several reasons. I’ve mentioned some of these on here before, but they can always be mentioned again. First, research that is interesting only to a small subset of people now may be applied to other things later. Second, furthering our understanding of the world isn’t frivolous. Third, in many cases basic research must be done at universities because industry will not pay for it.

Some examples: bird migration research that told us a lot about birds historically probably wasn’t very interesting to much of the scientific community. However, it’s become more important of late. One of my friends commented to me about how, in Europe during the avian flu, migration patterns became extremely important for predicting where the next outbreak could be. There are further uses: those migration patterns are being used to determine where to place wind farms, because we don’t want to put a wind farm in the middle of a bird migration path. The slaughter would be horrifying. Finally, changes in migration patterns may represent a shift in local climates. If birds take longer to migrate south, it indicates that the weather isn’t turning as early as it used to. Over time this data could indicate a trend, prompting us to look for further evidence of climate change.

In 2009 there was a rash of articles questioning the importance of scientific research in some cases. This isn’t really new; even at that point there’d been the infamous McCain bear comments. Even scientists make fun of some of the more obscure types of research with the Ig Nobel Prizes (one award was given to a researcher who cracked the knuckles on only one hand to test for differences in arthritis; there weren’t any). Despite this, some of this research is interesting and could be useful in the future. Take the recent finding that fish are angry in boring fish tanks. This research is pretty much useless unless you’re a fish fan. However, it also shows that we clearly don’t understand animals as well as we think we do. Even the popular story about the memory span of goldfish was shown to be wrong by the MythBusters. These examples indicate that many people don’t understand the importance of research, and that sometimes even scientists don’t. However, even seemingly pointless research can illuminate our understanding of the world, and people love to know nearly pointless facts. This also ties back to my first point above: we never know when something seemingly useless can suddenly have an importance beyond the scope of the original study. It may save lives. That finding about fish could help build better large-scale aquariums where it is safer to interact with dangerous animals, like killer whales and sharks.

My final point is that some basic research will not be conducted by industry players. There’s no guaranteed return on some scientific investigations. However, they can be incredibly important for the advancement of industry. Quantum computing could be the next big thing in computing, yet it’s being researched by a combination of industry and universities, with most of the money and risk on the university side. Our understanding of particle physics helps us understand how quantum computing can help. Eventually we may even be able to use the neutrino finding, if it pans out, in communication systems; there’s no reason why we wouldn’t be able to use the spin of a neutrino to transmit information.

Seemingly frivolous research is an important part of the scientific process. Enjoy it.

Ethics in Science III

I’ve been doing a series on ethics in science (part one, part two) because there have been a lot of public issues in the UK about the behavior of scientists. Any suggestions or laws put into effect would have far-reaching impacts, as any scientist in the UK would be required to follow them, along with any scientist who wishes to publish in a journal headquartered there. I believe Nature is, and Nature is THE journal to get published in.

There are some different suggestions on what should be done, including ethics review boards and independent verification of results. The UK’s investigation of fraud led to this result:

“In the same way that there is an external regulator overseeing health and safety, we consider that there should be an external regulator overseeing research integrity,” says the committee’s report. “We recommend that the government set out proposals on the scope and powers of such a regulator and consult with the research community and other relevant parties to develop them.”

I understand what they are going for here. They want to prevent another vaccine debacle or another cold fusion lie. I think they also hope to prevent another “Climategate.” While these are noble causes, I can’t help but fear that politics will get involved in this process. If a scientist is found to have committed true fraud, their career is over. There just aren’t the right incentives to commit fraud in MOST sciences. Yes, it happens, but it’s more likely to be a mistake than true fraud, which is something peer review might catch. However, even that is difficult without the initial data set or records of how the experiment was carried out. Scientists are pretty brutal during the peer review process. They question everything, and you have to have a satisfactory answer to all their questions if you want the results to be published. The best way to improve scientific debate is to provide incentives to publish articles that debunk previous research. This would fix more problems than a regulatory board for most of the sciences.

However, then we come to the medical sciences. Here there are much greater incentives to commit fraud or intentionally mislead. Why? Well, a blockbuster drug can sell billions in revenue a year. If a drug company thinks it has a blockbuster on its hands, it will try to get it to market sooner. In most cases they have patent protection for at most 10 to 15 years. But aren’t patents good for 20 years? That’s true; however, it typically takes drug companies 10 years to get a drug to market, leaving roughly 10 years of protection. After those remaining ten years, they are able to request a 5-year extension.
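A quick back-of-the-envelope calculation, using the illustrative round numbers above, shows where the 10-to-15-year window comes from:

```python
# Rough sketch of effective market exclusivity for a drug, using the
# illustrative numbers from the text (not legal or regulatory advice).

PATENT_TERM = 20       # years of patent protection from filing
YEARS_TO_MARKET = 10   # typical time to develop, test, and approve a drug
MAX_EXTENSION = 5      # possible patent-term extension after the fact

# The development clock eats into the patent clock, so exclusivity on the
# market is what's left over, plus any extension granted.
base_exclusivity = PATENT_TERM - YEARS_TO_MARKET
with_extension = base_exclusivity + MAX_EXTENSION

print(f"exclusivity: {base_exclusivity} to {with_extension} years")
```

The point is simply that half the patent term is typically gone before a single dose is sold, which is a big part of why blockbuster pricing (and blockbuster-sized temptation) exists.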

Why is the system set up like this? Well, the drug companies test a lot of different drugs, and not all of them can be blockbusters. A lot of them don’t make it through the rigorous testing process either. The drug companies have to pay for all of that as well as make a profit, so they charge a lot for these blockbuster drugs. They do offer some tiered pricing to try to help the poor out, though.

So, in clinical trials there is more incentive to commit fraud or withhold important results. What can be done about it? Well, Bernie Sanders (US Senator) has proposed a prize competition for developing different kinds of drugs, with the stipulation that, to get the prize, the winner would hand the patent to the US government. The government would then license the patent out so drugs could be cheaper. However, this prize would have to be huge, in the billions, to allow the drug companies to recoup their expenses, which would again provide incentives to defraud the government. It could force much stronger restrictions and oversight on drug trials, though, which could reduce the ability to commit fraud. The prize committee could be made up of scientists from the NIH (National Institutes of Health), which would do the data analysis for each of the “blockbuster” trials, thus forcing impartiality into clinical trials.

This could work. Additionally, there could be sanctions on fraudulent authors: being unable to publish for a year at any level, losing their grants, or being unable to hire new graduate students until they show they have been reformed. This would certainly kill their careers. However, it should happen.

Finally, I think that scientists should be required to disclose any conflicts of interest in their publications, as well as their sources of funding. In many cases this already happens because the funding agencies require it, but making it an explicit part of the publication process will make it more transparent. Transparency is vital to science.

Science isn’t perfect, but it’s our best tool for understanding the world around us. Committing fraud on the scientific community and the world as a whole is a horrible crime and should be treated as such.

Ethics in Science II

Yesterday I discussed some of the ethical concerns within the medical sciences. This field most likely has the most frequent cases of fraud and unethical behavior. Why? Because there’s a ton of money involved. Clinical trials relate to drugs, which are a multibillion-dollar industry. Additionally, there is no requirement by the National Institutes of Health to list any potential conflicts of interest. According to Nature, there was a plan in the works to require this; however, it got scuttled. In business, people go to jail for these types of things.

However, medical science is not the only place where fraud happens. As this ethics blog notes, there are several different kinds of fraud. Some are intentional; others are less so. The biggest problem is intentional fraud, where the author makes up a result. There are two pretty big examples of this. The first is the fake human clone from South Korea by a scientist named Dr. Hwang Woo Suk. This guy was rather quickly outed as a fraud, but not until there had been a HUGE debate in the mainstream media about the ethics of cloning human stem cells. This helped push the US and much of Europe to ban cloning of human embryos.

The second most famous case of fraud is cold fusion. What is cold fusion, though, and why would people want to claim they’d made it happen? Well, fusion is what the sun does; if we could manage to do that on Earth without burning ourselves up, that would be pretty awesome. Basically, as the PopSci article states, with fusion you get more energy out than you put in. It would essentially solve all the world’s energy problems, and the first person to do it would basically be a savior to the human race. So it’s something that people really want to do. There’s debate over whether it’s even achievable: it’s theoretically possible, but whether it’s physically practical is still up for debate.

Accidental fraud, on the other hand, comes about from introducing a personal bias or from misinterpreting data. Both happen fairly often in science. Why? Because we’re human, and this is what the scientific method is supposed to eliminate over time. Before publishing results you typically need to have been able to reproduce them and show that there is a trend that is consistent over time for the phenomenon you are studying. This is one of the biggest requirements of science, which is why clinical trials have at least three stages to ensure repeatability of the data.

The other good thing about the scientific method is that other people can take your results and findings and test them. If the results are different, they can be published and used to dispute the previous findings. This happens all the time in regular scientific discourse; in fact, there’s a great example going on right now, in a debate that has run for about a hundred years or so. Recently a group debunked Gould’s bias argument. Basically, a researcher back in the 1800s measured a big set of skulls to see if there were any size differences. Stephen Jay Gould, basically the Richard Dawkins of his day, re-analyzed the data because he felt there was bias in it, and found that there was in fact bias! Well, this recent group actually remeasured the skulls and found that it was Gould who was biased, and that if anything the original sample was more correct.

Science is supposed to be totally objective. As we can see from this discussion, it’s not, and cannot be. Why? We’re human. However, the system works really well as a whole. In my next blog I’ll discuss some of the ways we can address the issues of fraud and other concerns that I’ve mentioned over the past two days.

Ethics in Science

So, right now the UK is in a big uproar about ethics in science. There have been parliamentary hearings which have deeply concerned scientists. In one opinion piece from the Guardian, the author argues that the scientific community has gone too long without some sort of regulation. Scientists of course object to this, because there is a method to the manner in which they work. Many, judging from the tone at the hearings, feel this is another assault on the scientific community.

However, it may be that some scientific work is more likely to involve fraudulent activity. Today the Guardian published an article about scientific ghostwriters, who come in two forms. The first is harmless: the listed author is really the person who got the funding. Depending on the journal, these authors are either the second or the very last author on the paper. This is normal; typically you’re working in that person’s lab and they are paying you, so they should get some credit for the work done, as they may also have had an advising role. The second kind of ghostwriting is much worse. These writers were in no way associated with the research and their names are put on the article to give it weight, or they were the ones supposed to be doing the research and someone else did it. The Guardian article focuses on clinical trials for medicines.

The UK isn’t the only country where fraud, exaggerated claims, or ghostwriting occurs, although it has had one of the most famous cases with the retracted article linking the MMR vaccine to autism (which was, in fact, fraud). This also happens in the US and in many clinical trials. In fact, a Greek doctor has made it his mission to unearth clinical trial fraud and really understand what is going on there. The Atlantic had a great write-up about this in November of 2010. The doctor, Ioannidis, has been making a career out of debunking claims as well as researching the causes of these problems. He argues that the double-blind clinical trial isn’t giving us the best results we could possibly be getting in medical science, although he doesn’t offer many alternatives.

The New York Times also ran a story, in September 2010, about some of the ethics behind clinical trials. The article discusses how two cousins ended up in the same trial: one cousin was given the treatment and the other was not. The story really questioned the ethics of the clinical trial, because the treatment was obviously working. However, pushing through treatments without fully testing them can be just as dangerous. Granted, these people were near the end as it was. The cousin who didn’t receive the new treatment died after getting only the chemo.

On the one hand, we want to get promising medicine out as fast as possible. On the other, we want to ensure we are properly testing these medicines for safety. This leads to a great deal of ethical concern. Do we make exceptions for promising medicines? Do we allow fully untested medicine into the wild? These are difficult questions. From an ethical and moral standpoint, allowing a patient to die because of a randomized test is very questionable, which is what happened in the case above. However, in some cases rushing medicines like these through ends up causing deaths in other ways. In the case of Vioxx, this is exactly what happened: in many people it reduced risk, while in others it outright killed them. Where is the balance? I think this is why the UK is pushing for more oversight in these cases.

*Note: my dad, a nurse practitioner, pointed out that I was slightly wrong about Vioxx. He’s correct; there were more ethical problems than the drug simply being bad. The makers of Vioxx hid the fact that it impacted African Americans differently than white Americans. If they hadn’t done this, it wouldn’t have been a problem for the drug to stay on the market. If you want to read more about Vioxx, there’s a chapter in the book Denialism by Michael Specter.

In my next blog I’ll discuss scientific fraud and ethics in other fields.