Information Systems Ethics

The term ethics means "a set of moral principles" or "the principles of conduct governing an individual or a group". Since the dawn of civilization, the study of ethics and its impact on human conduct has fascinated mankind. But what do ethics have to do with information systems?

The introduction of new technology can have a profound effect on human behavior. New technologies give us capabilities that we did not have before, which in turn create environments and situations that have not been specifically addressed in an ethical context. Those who master new technologies gain new power, while those who cannot or do not master them may lose power. In 1913 Henry Ford implemented the first moving assembly line to create his Model T cars. While this was a great step forward technologically and economically, the assembly line reduced the value of human beings in the production process. The development of the atomic bomb concentrated unimaginable power in the hands of one government, which then had to wrestle with the decision to use it. Today's digital technologies have created new categories of ethical dilemmas.

For example, the ability to anonymously make perfect copies of digital music has tempted many music fans to download copyrighted music for their own use without paying the music's owner. Many of those who would never have walked into a music store and stolen a CD find themselves with dozens of illegally downloaded albums.

Digital technologies have given us the ability to aggregate information from multiple sources to create profiles of people. What would have taken weeks of work in the past can now be done in seconds, allowing private organizations and governments to know more about individuals than at any time in history. This information has value, but also chips away at the privacy of consumers and citizens.


Sidebar: Data Privacy, Facebook, and Cambridge Analytica

In early 2018 Facebook acknowledged a data breach affecting 87 million users. The app "thisisyourdigitallife", created by Global Science Research, informed users that they could participate in a psychological research study. About 270,000 people decided to participate in the research, but the app failed to tell them that the data of all of their Facebook friends would be automatically captured as well. All of this data theft took place prior to 2014, but it did not become public until four years later.

In 2015 Facebook learned that Global Science Research had collected data on millions of the research participants' friends. Global Science Research agreed to delete the data, but it had already been sold to Cambridge Analytica, which used it in the 2016 presidential primary campaign. The ensuing firestorm resulted in Mark Zuckerberg, CEO of Facebook, testifying before the U.S. Congress in 2018 about what had happened and what Facebook would do in the future to protect users' data. Congress continues to work on legislation to protect user data, a prime example of technology advancing faster than the laws needed to protect users.