Ethics in Science

So, right now the UK is in an uproar about ethics in science. There have been parliamentary hearings that have deeply concerned scientists. In one opinion piece in the Guardian, the author argues that the scientific community has gone too long without some sort of regulation. Scientists, of course, object to this, because there is a method to the manner in which they work. Many, judging from the tone at the hearings, feel this is another assault on the scientific community.

However, it may be that some scientific work is more likely to involve fraudulent activity. Today the Guardian published an article about scientific ghostwriters. Scientific ghostwriters come in two forms. The first is harmless: the listed author is really the person who got the funding. Depending on the journal, these authors appear as either the second or the very last author on the paper. This is normal; typically you're working in that person's lab and they are paying you, so they should get some credit for the work, as they may also have had an advising role. The second kind of ghostwriting is much worse. These writers were in no way associated with the research, and their names are put on the article to give it weight, or they were the ones supposed to be doing the research and someone else did it. The Guardian article focuses on clinical trials for medicines.

The UK isn't the only country where fraud, exaggerated claims, or ghostwriting occurs, although it has had one of the most famous cases: the retracted article fraudulently linking the MMR vaccine to autism. This also happens in the US, and in many clinical trials. In fact, a Greek doctor, John Ioannidis, has made it his mission to unearth clinical trial fraud and really understand what is going on there. The Atlantic had a great write-up about this in November 2010. Ioannidis has made a career out of debunking claims as well as researching the causes of these problems. He argues that the double-blind clinical trial isn't giving us the best results we could possibly be getting in medical science, although he offers few alternatives.

The New York Times also ran a story in September 2010 about some of the ethics behind clinical trials. The article describes how two cousins ended up in the same trial; one cousin was given the new treatment and the other was not. The story questioned the ethics of the clinical trial, because the treatment was obviously working. However, pushing treatments through without fully testing them can be just as dangerous. Granted, these patients were near the end as it was. The cousin who didn't receive the new treatment, and got only the chemo, died.

On the one hand we want to get promising medicine out as fast as possible; on the other, we want to ensure we are properly testing these medicines for safety. This raises a great deal of ethical concerns. For promising medicines, do we make exceptions? Do we allow fully untested medicine into the wild? These are difficult questions. From an ethical and moral standpoint, allowing a patient to die because of a randomized test is very questionable, which is what happened in the case above. However, in some cases rushing medicines through ends up causing deaths in other ways. In the case of Vioxx, this is exactly what happened: in many people it reduced risk, while others it outright killed. Where is the balance? I think this is why the UK is pushing for more oversight in these cases.

*Note: my dad, a nurse practitioner, pointed out that I was slightly wrong about Vioxx, and he's correct. There were more ethical problems than the fact that it was a bad drug: the makers of Vioxx hid the fact that it affected African Americans differently than white Americans. Had they not hidden this, it wouldn't have been a problem for the drug to stay on the market. If you want to read more about Vioxx, there's a chapter in the book Denialism by Michael Specter.

In my next blog I’ll discuss scientific fraud and ethics in other fields.

Software Patents are the new Copyright

In one of my previous posts I commented that I was seeing a convergence within copyright activities. I believe something just as troubling is starting to happen in the software patent world, and I think it will threaten the free software movement as well. We've had patent trolls around for a long time now, almost since the first patent was created; however, they didn't used to touch our daily lives. It was similar to the way copyright didn't affect you and me on a daily basis. Sure, changes in prices or the removal of a product could affect us, but typically we found a replacement or dealt with the price change. However, I think this new type of patent troll is more dangerous. Yesterday I saw a post on Ars Technica discussing how Lodsys is going after Apple app developers. Apple isn't happy about this at all, because it threatens to ruin the developer base they have built.

I think there are some other problems with this as well. Historically, if a software company was looking to go for an IPO or to be bought by another company, its products went through due diligence, where the code is checked for stolen code. This is a big deal, because if I stole code from Linux or some other open source software, my entire project falls under the GPL, which forces my source code to become open as well. This can create massive headaches for companies.

There is a key difference between what used to happen in the past and what is happening now. Before, it was the method of making something happen that mattered. For example, if I took a really fast way to sort something from open source, how it was sorted was what mattered, not that it sorted. Why does this matter? Well, the code is also technically copyrighted and owned by the writer. Now the outcome matters as well. What if someone had a patent on sorting itself? I've mentioned how crazy this would have seemed in the past, and how it would impact innovation.

Let's say someone decided to file for a patent on shooting animals at some sort of target through a controlled interface, where once the animal hit the target, the target interacted with the animal and the user interface changed to indicate the hit. I have two games on my phone right now, Angry Birds and Monkey Blaster, that would both be impacted by this patent. The two have very different goals and methods for shooting an animal at a target, and different results once it hits; indeed, the definition of a target differs between them. However, neither of these developers is going to be searching patents when they have an idea for their next game.

The patent mentioned in the Ars article is absurd. It should never have been approved. There's nothing novel in the development of the in-app purchase; it should be obvious to anyone in the computer industry, and you can easily see the relationship between a website and an application. In fact, I'm sure there have been earlier cases of this. Another open question is whether this is going to impact services like Steam; the article notes that Lodsys has already gone after EA.

This shift in behavior toward apps and software patents is a very bad one, and we need to work to address these types of problems. Returning to the requirement that a patent holder produce a product and have it on the market within a certain number of years could help. However, for software this will likely just lead to a crappy product put on the market that no one buys and no one knows about.

Networking and knowledge flows

We hear on a daily basis about how important networks are, both social and professional. I have to agree that they are extremely important; however, not all of us are actually good at actively expanding our personal networks. I'm personally terrible at it, which I think may be a problem for me over the next few years. I plan on getting into science and technology policy, if that wasn't already clear from my writings here, so having a broad network is important. I will need to keep up with technological, scientific, and political advances (although in the US, regressions may be more apt).

I just finished reading The New Argonauts by AnnaLee Saxenian, which really pointed out the power of networks. It's a pretty rosy interpretation of the benefits of networking for Taiwanese, Chinese, and Indian entrepreneurs who decided to move back home after working in Silicon Valley. It's a much better representation than Thomas Friedman's The World Is Flat, which is just ridiculously over the top in its optimism.

There are a lot of theories about how networks operate and what type of network you want to have. What do I mean by type of network? Well, I'm sure you can think of different networks that you have. You have close friends you are around all the time, and then you have co-workers you interact with in a different manner. Some of them you let into your social network; others you keep within your professional network. Within those networks, the structure can vary a great deal. At work you might have contacts in many departments and interact with them to get the best information about how to get a job done, or instead have a single go-to person for getting things done. My network when I worked at SAS was the former: I had to have many contacts in different departments. This was different from some of my colleagues, who only worked within one department and didn't have much external exposure. Changing your network type takes deliberate effort.

As I mentioned above, networking is good for information, and this is also the case in the scientific community. Saxenian focuses on technological knowledge flows in her book. She looks at the locations of firms and how they interact with both halves of their networks. Two halves? Yep: one in Taiwan and one in Silicon Valley. These Argonauts were bridges between the two regions, and this has allowed Taiwan to become a leader in computing.

You can also use social network analysis to identify people. This was an assignment for one of my classes, which had three different class codes. We were given our class network data, from a survey, and had to attempt to reconstruct our class networks. As you can see below, there was some clustering going on, with some people acting as bridges from one part of the network to the next. The points that bridge the clusters are good points for knowledge to flow from one part to the next; these are the people that are always good to have contact with.

Three major clusters roughly correspond to different courses
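As a rough illustration of how such bridge people can be spotted programmatically, here is a minimal sketch that flags anyone whose removal splits the network into more pieces. The names and edge list are invented for illustration, not the actual class survey data.

```python
from collections import defaultdict

def components(adj, removed=None):
    """Count connected components, optionally pretending one node is gone."""
    seen = set()
    count = 0
    for start in adj:
        if start == removed or start in seen:
            continue
        count += 1
        stack = [start]
        seen.add(start)
        while stack:
            node = stack.pop()
            for nxt in adj[node]:
                if nxt != removed and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return count

def bridge_people(edges):
    """People whose removal breaks the network into extra pieces."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    base = components(adj)
    return sorted(p for p in adj if components(adj, removed=p) > base)

# Two course clusters joined only through a short chain of bridges
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
         ("carol", "dana"), ("dana", "erin"),
         ("erin", "frank"), ("frank", "gail"), ("gail", "erin")]
print(bridge_people(edges))  # -> ['carol', 'dana', 'erin']
```

Everyone on the chain between the two triangles comes back as a bridge; the people safely inside a cluster do not.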

Further Reading:
Saxenian, A. (2006) The New Argonauts. Harvard University Press, Cambridge, MA.

EFF’s Tor challenge and Internet Freedom

First of all, no, I didn't participate in the Tor challenge. I don't feel I can use my computer that way while I'm doing a lot of schoolwork on it. However, I think the idea is excellent. I didn't explain what Tor is, did I? Well, here's the EFF website about Tor. TL;DR: basically it provides a way for you to hide your actual IP address. You install a piece of software to access the network; once you're on it, your data bounces through several relays and comes out at an exit point, which is your "final" IP address. This final address takes the brunt of any legal or illegal activity being conducted over the Tor network. The EFF suggests that you do not run an exit relay out of your home, and the Tor project has some recommendations on running an exit point. However, it should be safe to run a middle relay to allow traffic to flow through your home address, since the data that flows between middle nodes is encrypted. See the picture below.

EFF representation of the Tor network: from Tor Project
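The layering described above can be sketched as nested encryption: the client wraps the message once per relay, and each relay peels exactly one layer with its own key. This is only a toy model of the idea; real Tor negotiates fresh session keys with each relay and uses real ciphers, while the XOR "cipher" here exists purely to show the layering and is not secure.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256 (illustration only, NOT secure)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap(message: bytes, relay_keys) -> bytes:
    """Client encrypts in layers, the exit relay's layer innermost."""
    for key in reversed(relay_keys):
        message = keystream_xor(key, message)
    return message

def unwrap_one(layered: bytes, key: bytes) -> bytes:
    """Each relay peels exactly one layer with its own key."""
    return keystream_xor(key, layered)

keys = [b"entry", b"middle", b"exit"]   # hypothetical per-relay keys
cell = wrap(b"GET / HTTP/1.1", keys)
for k in keys:                          # traffic passes entry -> middle -> exit
    cell = unwrap_one(cell, k)
print(cell)  # only the exit relay sees the plaintext request
```

The point of the structure is that the entry relay knows who you are but not what you said, and the exit relay knows what was said but not who said it.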

Why is this technology important? It helps with freedom of speech. The US constitution protects free speech, and this is an important tool in enabling it. Of course, like any proxy website or VPN, it can be used for other purposes, as can the ideals of free speech themselves. We may not like what it is being used for, what is being said, or why, but it's still legal. One thing noted repeatedly on both the EFF and Tor pages is the risk of DMCA takedowns and law enforcement attention. Both of these have a chilling effect on freedom of speech.

It seems to me that copyright control and enforcement may seriously damage a project like this. If all the exit nodes are shut down because of copyright takedown notices, we lose a valuable tool for preserving our freedom of speech, as well as an assumed right to use the internet in the way we feel is best.

Another concern I have about this technology is its obvious potential use by hackers. This tool is going to be used by hackers; it would be foolish for them not to. That, of course, puts the technology at odds with the wishes of governments to control copyright infringement and prevent hacking of businesses and government agencies. I seriously hope that the US government, and the EU, give exit nodes protection from legal repercussions when hackers use these networks. Used in the right way, Tor could be a modern Underground Railroad for dissenters in countries like Libya, Yemen, and Saudi Arabia.

Aaron Swartz and Freedom of Knowledge

Aaron Swartz has been arrested and accused of a multitude of crimes (for a breakdown of them, go here) for gaming JSTOR, a large journal retrieval site where many journals are stored. As someone who works with these retrieval services quite often, and has actually hit the limit on the amount of citation data you can pull from them, I know they can be frustrating. Some of the work I'm personally doing right now involves citation analysis and co-authorship analysis, which make networks of knowledge flows visible. Another method is to do a word analysis within articles to create knowledge networks based on what the articles are about and what knowledge each contains. Apparently, Swartz has done something like this in the past. Some of my colleagues also use techniques to gather additional information. Most of this information, even when you have legal access, is difficult and very time-consuming to procure. In this case, Swartz had access and may have been able to get hold of this data through other means; JSTOR mentioned in one of their releases that they have a program that allows for high-volume access to their publications.
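The co-authorship analysis mentioned above boils down to turning paper records into a weighted network of author pairs. A minimal sketch, using invented paper records rather than anything pulled from a real retrieval service:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records; real data would come from a citation database
papers = [
    {"title": "Paper A", "authors": ["Liu", "Garcia", "Patel"]},
    {"title": "Paper B", "authors": ["Garcia", "Patel"]},
    {"title": "Paper C", "authors": ["Patel", "Okafor"]},
]

def coauthor_edges(papers):
    """Weight each author pair by the number of papers they share."""
    weights = Counter()
    for paper in papers:
        for a, b in combinations(sorted(paper["authors"]), 2):
            weights[(a, b)] += 1
    return weights

edges = coauthor_edges(papers)
print(edges[("Garcia", "Patel")])  # -> 2 shared papers
```

The resulting edge weights are exactly the kind of data you would feed into the bridge-finding analysis from the networking post: heavily weighted pairs are tight collaborations, and authors linking otherwise separate clusters are where knowledge flows between fields.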

This case has also made me think of a few other issues with our current knowledge retrieval systems and repositories. Companies need to make money off these publications, so we can't have them for free. However, through my research, I've used articles that are 20 years old. If this knowledge were patented, I would be able to access and use it with no problem at this point; in many cases even sooner, as many patents aren't renewed after a certain time frame. Using a scientific article is typically more like using something published under a Creative Commons license, which means you can remix the information. Through citations you give credit where it is due. In most cases you can get access to the data and models if you give the person credit, either through citations or co-authorship. Why does this work? Because the research is publicly funded.
Authors can also pay to allow full free access to their work, depending on the journal. In most cases, however, they don't, or the article doesn't stay free continuously. There is some relief from the burden of paying for individual articles: Google Scholar can find articles that scientists host on their personal websites, giving access to "working paper" versions, drafts from before final publication, even after the article has been published.
I think for publicly funded research we need an exception to copyright law that shortens the term from 70 years to 10. Depending on the field, even 10 years is too long: in my wife's field, articles that old are typically cited only to give credit to trailblazers, and those papers rack up citations in the hundreds compared to the average in the tens. Once the copyright expired, there would be much more competition in distributing the articles, reducing the risk to the knowledge community if any given retrieval system or journal fails.
The Swartz case scares me in general, because it will make it even more difficult to access information, and it carries a large risk for anyone who creates scripts to make it easier to access massive amounts of data.