More Megapixels, More Image Quality?

We have watched the megapixel counts of digital cameras climb over the past few years. I still remember when 0.5 megapixels was the largest image size you could find, while nowadays we can find cameras with 24.3 megapixels, and the counts will keep rising as camera companies keep telling users that more megapixels translate into better image quality. Personally, as an amateur photographer and a researcher in the field of image processing, I think that most of the time an image with more than 6 megapixels is a waste of memory and camera resources.

Let me start by explaining the reasoning camera makers use to convince users that more megapixels is better: printing quality. As you know, good print quality is achieved when the printing resolution is equal to or higher than 300 PPI (pixels per inch); therefore, if you want to print a large image with good quality you need a large image. For example, with a 2-megapixel image the largest print size at 300 PPI would be 14.7 cm x 9.7 cm (5.8” x 3.8”). You can do the math yourself, but on the Imagine 123 page you will find a table of image sizes and the print sizes they allow. Camera makers tell users that with more megapixels they won’t just be able to print in larger formats, but that they will also obtain more detailed photographs, since there are more pixels to represent the objects in the image. I don’t say this claim is completely false, but you need to consider other aspects that aren’t as straightforward as “bigger is better”, and this discussion has been in the air for some years now, as you can see in this CNET news note from 2007.
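To make the math concrete, here is a quick sketch (the 1728 x 1152 pixel dimensions are my own assumption for a roughly 2-megapixel image in 3:2 aspect ratio):

```python
# Maximum print size at a given PPI, for a hypothetical 2-megapixel
# image in 3:2 aspect ratio (1728 x 1152 pixels, ~1.99 MP).
PPI = 300          # pixels per inch needed for good print quality
CM_PER_INCH = 2.54

width_px, height_px = 1728, 1152

width_in = width_px / PPI
height_in = height_px / PPI

print(f"{width_in:.1f} in x {height_in:.1f} in")  # ~5.8" x 3.8"
print(f"{width_in * CM_PER_INCH:.1f} cm x {height_in * CM_PER_INCH:.1f} cm")  # ~14.6 cm x 9.8 cm
```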
If we accept as a fact that most photography enthusiasts don’t print their photos in large formats, then image detail is the only reason camera makers have left to offer users more and more megapixels every day. But is it really true that more megapixels are synonymous with more detail? My answer is yes in just a few cases, but most of the time it is a big no. Let me explain my reasons:

First we need to consider the sensor of a digital camera. It is an array of light-sensitive elements, and each pixel corresponds to a small area of the sensor, meaning that the information in each pixel is the sum of the light arriving through the lenses onto that pixel’s area. Now, if we keep the size of the sensor constant and increase the megapixels, the resulting pixel size is reduced, so less light arrives at each pixel; this amplifies the effects of electrical noise in the sensor, degrading not just the sensitivity to finer tonal gradations but also the quality of the image in dim conditions. As an example, I took two different photographs: one using my camera with 6 megapixels (2816 x 2112 pixels) and a 7.18 mm sensor, and one using a camera from the HORUS system with just 1 megapixel (1024 x 768 pixels) but an 8 mm sensor, i.e., pixels more than twice as large. You can see that there is more noise in the image captured with the 6-megapixel camera, despite the fact that there are more pixels to represent the same object. You can see the complete pictures on my blog.
My camera
HORUS system camera
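As a rough sanity check, you can estimate the pixel pitch by dividing the sensor diagonal by the image diagonal in pixels. Here is a small sketch (I’m treating the 7.18 mm and 8 mm figures as sensor diagonals, which is an assumption on my part):

```python
import math

def pixel_pitch_um(sensor_diag_mm, width_px, height_px):
    """Approximate pixel pitch: sensor diagonal divided by image diagonal in pixels."""
    diag_px = math.hypot(width_px, height_px)
    return sensor_diag_mm * 1000 / diag_px  # micrometres per pixel

print(pixel_pitch_um(7.18, 2816, 2112))  # ~2.0 um  (my 6 MP camera)
print(pixel_pitch_um(8.0, 1024, 768))    # ~6.3 um  (HORUS camera)
```

With roughly three times the linear pitch, each HORUS pixel collects around nine times the light, which is consistent with the noise difference in the two photographs.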

Noise is not a problem in brightly lit scenes; that’s one of the few cases where bigger is better. But for dim conditions, camera makers try to solve the problem with clever image processing, for example by increasing the gain of the light sensor and using filtering algorithms to reduce the noise, often reducing the image size as well. As you can imagine, this processing ends up altering the image, and for purists that can be a downside of high-megapixel cameras.
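To illustrate the trade-off, here is a toy simulation (purely synthetic numbers, not any real camera’s pipeline; it assumes NumPy and SciPy are available): raising the gain amplifies the noise along with the signal, and a median filter then smooths the noise at the cost of altering the image:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# A synthetic dim scene: weak uniform signal plus sensor noise.
signal = np.full((64, 64), 10.0)
noisy = signal + rng.normal(0.0, 3.0, signal.shape)

gain = 8.0
amplified = noisy * gain                      # brightens the scene but amplifies the noise too
denoised = median_filter(amplified, size=3)   # smooths the noise at the cost of fine detail

print(f"noise (std) after gain:      {amplified.std():.1f}")
print(f"noise (std) after filtering: {denoised.std():.1f}")
```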
In the end, professional photographers may well exploit the advantages of large images, but we must keep in mind that image quality is not determined by megapixels alone: we also have to take into account the camera’s optics (lenses) and especially the sensor’s size and sensitivity. Therefore, we shouldn’t trick ourselves into the “bigger is better” mantra of most camera makers and sellers.

If I made video games, this is how I’d deal with Piracy

Piracy is a real issue. It can impact the livelihoods of artists as well as big companies. However, the methods companies resort to when fighting piracy are extreme and infuriate end users: the people who listen to music or play games for the love of music or video games.

My friends over at KMBOD have written in the past about how horrible some of the Digital Rights Management (DRM) systems on video games are. These systems require continual verification that the game has actually been purchased. In some cases they make the game unplayable, or extremely difficult to play. In some cases the user must be online the entire time, regardless of the type of game being played. It makes sense for a game to be online if you’re playing multiplayer, but if you’re playing a single-player game why would you need to be online? Why should the game suddenly crash if you get disconnected from the internet? These things anger the gaming community and drive them away from specific titles and potentially from entire publishers, such as Electronic Arts and Valve.

I don’t think that DRM is the right system to use. For one, it’s easy to get around if you really want to, and many players look at DRM as a challenge: something to break and publish online as a community service. It’s not just video games, either, but also DVDs, Blu-rays and CDs. In fact, in the US it’s illegal under the DMCA to circumvent DRM.

So what would I do instead? There are a fair number of easy distribution channels for video games now: there’s Steam, EA’s Origin and a few others I’m less familiar with. There’s also buying from Amazon, Best Buy, GameStop and a bunch of other stores. So access to games is pretty easy. Price might be an issue, but for good games people are willing to pay a premium; just look at the sales of Skyrim and Modern Warfare 3, huge blockbuster games. The changes below are framed around first-person shooters, but similar changes could apply to other types of video games, such as RPGs or strategy games.

Despite the ease of access, people still pirate because they want to try a game before they drop $60 on it. So what I’d do is make it as easy as possible to access both legally and illegally; I fully believe in the try-before-you-buy model. However, for copies that weren’t installed from a CD or downloaded from an online distributor like Steam, the game quality would be diminished. For instance, many gamers care about a game’s frames per second: games typically target 60 fps, and while it’s debatable how much faster than that the eye can perceive, we can certainly tell when a game runs much slower. In the illegal versions I would cap the game at 30 fps, but it would initially start at 60 fps and degrade over the course of a minute or two, with a little note flashing that if you buy the game you get the full 60 fps.
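A rough sketch of how that ramp might look, with hypothetical numbers and function names of my own choosing (a real engine would cap frame rate differently):

```python
FULL_FPS = 60
CAPPED_FPS = 30
RAMP_SECONDS = 90  # "a minute or two"

def current_fps_cap(elapsed_s, licensed):
    """Unlicensed copies ramp down from 60 fps to 30 fps over RAMP_SECONDS."""
    if licensed:
        return FULL_FPS
    progress = min(elapsed_s / RAMP_SECONDS, 1.0)
    return FULL_FPS - (FULL_FPS - CAPPED_FPS) * progress

# The cap at a few points in time for an unlicensed copy:
for t in (0, 30, 60, 90, 120):
    print(f"t={t:>3}s  cap={current_fps_cap(t, licensed=False):.0f} fps")
```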

Another feature gamers complain about is the perspective within the game, the field of view (FOV). A narrow FOV is described as feeling like you’re playing with your head in the monitor; basically, it’s a restriction on peripheral vision. Again, I would start the game out with full vision and then slowly move the FOV into the “monitor”, restricting the view and giving paying customers an advantage over pirate customers.

I would also make the unlicensed player do less damage than their paying counterparts. This would reduce their number of kills and make them less effective on the playing field: more likely to die and less likely to kill. Finally, I would put a little pirate flag next to any player who didn’t legally purchase the game, so all the other players would know when someone hadn’t bought it. In games where kill counts matter, this could get users banned from servers, reducing the ease of access for playing.
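Bundling the FOV restriction, the damage reduction and the pirate flag from the last two paragraphs into one place, a minimal sketch might look like this (all names and values are mine, purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class PlayerHandicaps:
    fov_degrees: float = 90.0      # full peripheral vision
    damage_multiplier: float = 1.0
    show_pirate_flag: bool = False

def handicaps_for(licensed: bool) -> PlayerHandicaps:
    """Paying players keep full stats; unlicensed copies are degraded."""
    if licensed:
        return PlayerHandicaps()
    return PlayerHandicaps(
        fov_degrees=65.0,          # narrowed "head in the monitor" view
        damage_multiplier=0.75,    # fewer kills, more deaths
        show_pirate_flag=True,     # visible to everyone on the server
    )

print(handicaps_for(licensed=False))
```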

None of these things would ruin the game to the point that someone wouldn’t want to play it. What they would do is push people towards paying to be able to compete at the same level as everyone else.

Billions and trillions

One of Carl Sagan’s books that I really like is “Billions and Billions”, where he wrote about the importance of exponentials, the connection between hunting and football, the true size of the universe, the decline of our planet, government and even abortion. Though I read it in English, I once found a Spanish translation of the book at a friend’s house, and I was surprised when I saw the translated title: “Miles de Millones”, which means “Thousands of Millions”. If you are a native English speaker you might be thinking “Why were you surprised? A billion is a thousand millions, in other words it is 10^9”. That is the main reason I decided to write about this: in most Spanish-speaking countries the term “billion” means a million millions, i.e. 10^12, and probably now you understand my surprise.
Historically, the term billion in English was first used to designate 10^12, following the French numbering system, and it was introduced in the 15th century [1]. That meaning is now part of the so-called long-scale system, where a trillion is 10^18; meanwhile in the short-scale system, used in most English-speaking countries, a billion is 10^9 and a trillion is 10^12. Surprisingly, the short-scale meaning was also introduced by France, in the late 17th century, even though France officially uses the long-scale system nowadays. England used the long-scale system for a long time before changing to the short scale, meaning that when reading old English documents you must be careful about the meaning of billion and trillion.
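Since the same words flip meaning between the two systems, here is the mapping from the paragraph above as a quick reference:

```python
from math import log10

SHORT_SCALE = {"billion": 10**9,  "trillion": 10**12}  # most English-speaking countries
LONG_SCALE  = {"billion": 10**12, "trillion": 10**18}  # e.g. modern France, Spanish-speaking countries

for word in ("billion", "trillion"):
    print(f"{word}: 10^{log10(SHORT_SCALE[word]):.0f} (short scale) "
          f"vs 10^{log10(LONG_SCALE[word]):.0f} (long scale)")
```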
If you are used to exponential notation, then this whole discussion might seem pointless, since you already have an unambiguous way to describe large quantities that doesn’t need the confusing terms billion and trillion. Along those lines, the International Bureau of Weights and Measures (BIPM) suggests avoiding the use of billion and trillion, since their meaning is language dependent, and I think that scientists who publish or communicate their work should be aware of this ambiguity and avoid it, or at least be clear about the scale they use. As a recent example, take the news about the MIT camera that is able to capture video at the speed of light: the title uses the phrase “one trillion frames per second”, and the word trillion appears throughout the official website of the project, yet I couldn’t find a footnote or an explanation of the scale being used. So, after my initial excitement about a camera capturing data at 10^18 frames per second, I had to use common sense to realize they meant 10^12 frames per second, since their results have time spans of nanoseconds (10^-9 seconds) and hundreds of picoseconds (100 × 10^-12 seconds). I’m not saying their results lose importance because the camera works at “just” 10^12 fps; that’s still very impressive if we take into account that most commercial video cameras don’t go beyond 30 or 60 fps, and that the fastest video camera I have worked with has a maximum frame rate of 1000 fps. I’m just saying that at first I imagined the amount of data captured and the transfer and storage capacities needed to work with it, and later everything looked a little smaller, because my frame of reference was using the long-scale system.
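The sanity check is one line of arithmetic: the interval between frames is the reciprocal of the frame rate, and only the short-scale reading is consistent with picosecond-scale results:

```python
# Frame interval implied by each reading of "one trillion frames per second".
for label, rate in (("short-scale trillion (10^12 fps)", 10**12),
                    ("long-scale trillion (10^18 fps)", 10**18)):
    print(f"{label}: {1 / rate:.0e} s between frames")
# 10^12 fps gives 1e-12 s (one picosecond) per frame, consistent with results
# spanning nanoseconds; 10^18 fps would imply attosecond frames.
```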
In a globalized world, where communication between people from different countries and languages is commonplace, we need standards to communicate our ideas unambiguously, and we must try to let everyone fully understand the information we share with them, even when their common sense should be enough to work it out. Since there is no chance of a single standard meaning for billion and trillion across the world, I invite everyone to avoid these terms, or at least to explain the meaning they give them in their work.


[1] Smith, David Eugene. History of Mathematics. Courier Dover Publications. pp. 84–86. ISBN 978-0486204307.

Are patents going to impact how doctors treat a patient?

Today Ars Technica reported on a case before the US Supreme Court, and on how the court is assuming that a patent on the use of scientific data that has already been published is valid. This is a pretty scary scenario. What do I mean? Well, the patent covers how the level of a certain chemical should affect the dosage of a drug. That’s it. If you have level X in your blood, you should get dosage Y. The patent holder created a device that tests the level of the chemical in your blood and then suggests a dosage. The Mayo Clinic developed its own test and has been administering it without paying anything to the company. The arguments in court essentially assume that this is a valid patent.

Should this patent be valid, though? It hardly seems like something that should be patentable: based on what is considered patentable, this falls under mathematical formulas, which are excluded. Essentially, it is a matter of correlation and basic regression analysis. During a drug trial you can determine the correlation between the dosage of a drug and the resulting level of the chemical. This is really how all medicine works. If you can reduce costs by creating your own test and administering it yourself, then that’s great; hospitals should be encouraged to do this if they are large enough.
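To show how thin the “invention” is, the whole idea fits in a few lines of regression on made-up trial data (every number here is hypothetical):

```python
import numpy as np

# Hypothetical trial data: drug dosage (mg) vs. measured chemical level.
dosage = np.array([50, 100, 150, 200, 250], dtype=float)
level = np.array([410, 350, 290, 240, 180], dtype=float)

# Fit level as a linear function of dosage...
slope, intercept = np.polyfit(dosage, level, 1)

# ...then invert it to suggest a dosage for a target level.
def suggested_dosage(target_level):
    return (target_level - intercept) / slope

print(f"level ~ {slope:.2f} * dosage + {intercept:.1f}")
print(f"suggested dosage for level 300: {suggested_dosage(300):.0f} mg")  # ~145 mg
```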

This is what doctors do. They read the literature about a medicine, the condition it’s supposed to treat, and the connection between dosage levels and response rates in patients. Every doctor has to use some test to determine the level of a chemical or some condition: pulse (irregular heartbeats), blood pressure (pressure cuffs), blood sugar (the A1C test), and the list goes on. In each case the doctor is able to assign a proper dosage prescription based on the study of patients. If doctors were required to pay a licensing fee in each and every such case, our currently exorbitant health care costs would seem cheap, like when we used to complain about $1.50/gallon gas.

The other problem with patenting something like this is that it’s likely to be highly unenforceable except against a large institution like the Mayo Clinic. Individual practitioners will be safer than large clinics, but they could be impacted as well. If they are required to use an extremely expensive proprietary testing methodology, rather than having the ability to use any testing method, it will drive up prices and may put doctors out of business.

If the court rules that these types of patents are valid, we will need to push to have patent law changed again. The last change moved things, in general, in the right direction, but a lot more work needs to be done.

Data protection, anonymity and copyright

I talk a great deal on this blog about data issues: privacy and ownership, anonymity and copyright. But is there a clear connection between them? Should we care about who has access to our data, who we are online and who controls our access to data?

I think these issues are so connected that we need to do something about how they are managed at the federal level. Currently, it’s rather easy for governments to request data from internet sites: sometimes they require warrants or court orders, other times the companies simply hand over the data. Savvy users understand how their data is collected and used by companies; I’ll be the first to admit that I’m still learning about this as I go. It’s not easy, because sometimes it’s really inconvenient to truly protect your data. The more sites are connected together, the more likely one of your accounts is to be hacked. Linking sites also creates other problems, specifically with Facebook and Google. Twitter isn’t as bad, but it easily could be.

Why are Facebook and Google bad, though? Facebook is the worst by far: both Zuckerbergs have made statements proclaiming privacy a bad thing. We can see this erosion with the creation of Facebook’s OpenGraph and seamless information sharing. We’ve all seen the increase in the amount of information our friends are sharing, such as Spotify tracks and articles they’ve read, which now no longer click through to the source but end up going to some app from that company. All of this information is being stored and sold to customers with your name on it. Effectively, you’ve lost the ability to view websites freely without your activity being stored on multiple servers by multiple companies at the same time.
Google comes in a close second with its privacy problems. It isn’t any better with Google+, which requires real names at this time. We also don’t know what Google does with the information you give it when you link accounts together. By giving Google access when you sign into another website, you let Google learn more about you, and that knowledge will likely be used to adjust your filter bubble.

Without anonymity, or at least pseudonymity, it’s significantly more difficult to control access to your data. Putting a buffer between you and the people interested in learning about you as a person can protect you from a lot of bad actors. However, whenever there are discussions about anonymity or pseudonyms, someone almost always argues that they will increase the safety of child molesters or terrorists.

The copyright industry is one of the most vocal advocates of this tactic; in fact, it is one of the arguments being used for SOPA. They argue that if you don’t have anything to hide then you have nothing to worry about. Well, I don’t buy that argument. People put privacy fences around their yards for a reason; why not do the same for your data? Being anonymous doesn’t mean you’re bad, it just means you’re being safe.

Anonymity makes it more difficult for copyright holders to come after people who download movies without buying them. They want to know whether you’re downloading a movie, regardless of the fact that you might actually own it in some other physical medium and be using the digital copy as a backup. They also don’t really care if you go out and buy the movie after watching it. In fact, the Swiss government came out and said that buying a movie or song after downloading it is extremely common.

Based on these three points, I believe everyone should be pushing leaders to increase users’ ability to be anonymous on the internet. This would protect users’ data from identity theft, give users better control over their data and decrease the impact of the filter bubble. We must accept that people may use this freedom in unethical ways. However, that doesn’t mean it’s unethical for people to be anonymous online, nor that anonymous people are unethical. It means we need to define clear laws and procedures to deal with unethical or illegal activities in these systems. Without such guidelines we are likely to have no control over our data.