Blockchain & Capitalism

“It’s not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest”. — Adam Smith, The Wealth of Nations

Blockchain fulfills the most basic principle of capitalism, trust, to a degree without equal in anything else we have today. Without trust, our entire economic system fails. If I can’t be assured that my payment will arrive to you untampered, I won’t trade with you at all. This novel technology secures transactions by tying every single one together, such that breaking, or hacking, into a single block will affect every other one in the chain; hence, a blockchain. In an era where hacks are putting a massive dent in our trust in institutions, blockchain democratizes that trust and places it in the hands of every person interacting with it. It’s a fitting paradox that, in the same way greed enables capitalism (by having people work for their own self-interest), blockchain enables it by relying on the fact that no one trusts anyone else. A sad state of affairs? Yes, but it’s a solution that works.
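
For the technically curious, here is a minimal sketch in Python of that chaining idea (a toy of my own, not any production blockchain): each block carries a hash of the previous block, so tampering with any one block invalidates every block after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, which include the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

# Build a tiny three-block chain.
chain = []
prev = "0" * 64  # genesis marker
for txs in (["Alice pays Bob 5"], ["Bob pays Carol 2"], ["Carol pays Dan 1"]):
    block = make_block(txs, prev)
    chain.append(block)
    prev = block_hash(block)

def is_valid(chain):
    # Recompute every link; editing an earlier block breaks all later links.
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(is_valid(chain))                               # True
chain[0]["transactions"][0] = "Alice pays Bob 500"   # tamper with history
print(is_valid(chain))                               # False: the chain is broken
```

Real blockchains add proof-of-work and replicate this ledger across many nodes, but the chaining principle is the same.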

I won’t go into explaining the fundamentals of blockchain; there are numerous sources where you can find that out. Suffice it to say, Mr. Nakamoto’s invention of Bitcoin did indeed change the world for the better, though perhaps not by delivering that much-awaited libertarian utopia. Bitcoin as a cryptocurrency is speculative at its best, “a fraud” at its worst (according to JPMorgan CEO Jamie Dimon). Its fluctuating price is a rollercoaster that mirrors the emotional moods of its investors: ecstasy, or perplexity. I don’t want to negate the validity of cryptocurrencies, however. They have their use and purpose, the same way you purchase tokens at a Chuck E. Cheese’s to partake in the privileges of jumping into a ball pit or eating the pizza equivalent of a cholesterol bomb. Certain merchants want the privacy to sell goods and services away from the prying eyes of government. That’s all good, as long as they follow the laws of their state. My belief is that the underlying technology, blockchain, will become the most important technology of the next decade, ahead of artificial intelligence and self-driving cars (unless Elon Musk lands on Mars in 2025 and starts a colony, a timeline which even he calls “aspirational”).

The processing of transactions worldwide is the engine that drives economic growth. Adam Smith recalls the necessity of trust between parties exchanging goods: “In a free trade, an effectual combination cannot be established but by the unanimous consent of every single trader, and it cannot last longer than every single trader continues of the same mind” (Book IV, Chapter VIII). This trust is essential before any transaction takes place. The rise of globalism has connected traders from every corner of the world to exchange every kind of good imaginable. It allows me to drink this coffee from Guatemala while eating bread made with flour imported from Colombia and raisins from California, all while I type this on a MacBook manufactured in China. Milton Friedman famously observed that not a single person on earth knows how to make a retail pencil. Someone had to cut the trees, another had to operate the machinery that stripped the trunk, another had to paint it, another had to market it, another had to take it to the retail location, and finally someone had to place it on the Wal-Mart shelf. Every single one of these transactions has to be recorded to ensure it’s compliant and no one is cooking the books. It’s a laborious process that takes days and mountains of paperwork. Blockchain eliminates the paperwork and cuts transaction times to milliseconds.

It’s a tough competition, but blockchain technology could become an even larger player in the coming years than artificial intelligence. Many companies are struggling to implement AI. Some have successfully applied it to their business (like Google and Amazon), but many, many more have failed to find a meaningful return on investment. It’s no surprise: current AI requires very large amounts of training data in the first place. A data scientist (or a team of them) must then parse through the data and ensure it’s “cleaned.” Finally, it’s fed to the learning system in the hope that it will return a meaningfully low cost function, fit to operate in production. All of these steps require time, investment of resources and, most importantly, investment of talent. That’s a luxury many companies cannot afford. Now, I won’t say that blockchain is a piece of cake to implement (you’d have to install the software, connect it to your payment systems and educate the workforce), but the returns can be felt immediately: faster processing times, increased security and better transparency.
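
To make those steps concrete, here is a schematic sketch of that pipeline using scikit-learn. The file name and column names are hypothetical, purely for illustration.

```python
# A schematic sketch of the gather -> clean -> train -> evaluate pipeline.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

# 1) Gather a large amount of data (hypothetical dataset).
df = pd.read_csv("transactions.csv")

# 2) "Clean" it: drop incomplete rows and duplicates.
df = df.dropna().drop_duplicates()

# 3) Feed it to the learning system ("is_fraud" is a made-up label column).
X = df.drop(columns=["is_fraud"])
y = df["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4) Hope the cost function is meaningfully low before going to production.
print("held-out log loss:", log_loss(y_test, model.predict_proba(X_test)))
```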

In economics, there are three ways to be the top player in a market: be first, be best, or be the only one (a monopoly). The mavericks have jumped on the blockchain ship, and the rest of the world is playing catch-up. It’s up to companies now to decide whether to join in, or let the sharks (i.e., greedy hackers demanding ransoms in Bitcoin) encircle the less protected. It’s a rainy day as I write this, and I’m comfortable going to bed tonight knowing what kind of world awaits me tomorrow. Which one will you decide to have? One that runs on cryptocurrencies, or one that demands your money through a cryptocurrency?

AI: Savior or Curse?

The first observation in Adam Smith’s Wealth of Nations was on the division of labor: more specifically, how an economy, as it progresses, becomes increasingly compartmentalized into specialized divisions that produce a particular good or service. For example, a single shoemaker will not be as fast or efficient as ten shoemakers each doing a particular part of the same shoe. Today’s division of labor is being supplied by advances in artificial intelligence. The creation of a digital mind that can perform complex, human-like tasks is being implemented on a wide scale in enterprises. This has brought great gains in productivity, but also challenges in its usage.

Recently, a machine-learning program at JPMorgan saved the firm 360,000 hours of interpreting mundane loan agreements. Google uses an AI program to save 15% on energy expenditures at its data centers. These and many other examples are playing out in the world right now, increasing productivity in organizations and thereby making them wealthier.

However, there is also the ever-looming challenge of implementing the AI. Unlike a conventional software program, an AI must be trained with huge amounts of data. If the data are corrupt, the training algorithms are off, or the skills acquired by the AI don’t match the application for which it was designed, it can easily become a multimillion-dollar mistake.

AI doesn’t solve everything, nor is it a technology to be dismissed. Like anything, it is a tool that we can use to our advantage, if we think about it carefully enough. If we see AI as a means rather than an end in itself, businesses and organizations can bring about the new, fourth revolution that the internet has so far failed to deliver; apparently people feel more distracted than wise when watching compilations of cat videos or seeing the zillionth newborn baby on their Facebook feeds.

The war no one talks about

We are in the midst of a massive, global war campaign.

There aren’t any corpses (that we know of), no open-plain battlefields, no guns in sight. I’m talking about the daily battles being fought in cyberspace between people, groups and nations. Now, these kinds of attacks are indeed talked about, but barely at all compared to the scope of what is being carried out. I believe there needs to be more exposure of the dangers of cyber-terrorism and government malware attacks. To ignore these attacks, offer dumbed-down explanations, or dismiss the attackers as solitary lurkers is dangerous and lets the enemy take the upper hand. Let me explain.

“War is not merely an act of policy but a true political instrument, a continuation of political intercourse carried on with other means” – Carl von Clausewitz.

Hacking traces its roots further back than you’d expect. In 1988, a graduate student from Cornell released a worm onto ARPAnet (the precursor to the internet), infecting some 6,000 government and university servers. He was fined $10,000 and sentenced to three years of probation (New York Times).

In 2010, Stuxnet was discovered: a sophisticated cyber-weapon, believed to have been jointly built by American and Israeli forces, designed to destroy Iranian nuclear centrifuges.

In 2011, the PlayStation Network was hacked, exposing the private information of 77 million players in one of the largest data breaches in history (CBC News).

Earlier this year (2017), the malware WannaCry infected over 230,000 computers, demanding a Bitcoin ransom payment to unlock each one. Over $130,000 was collected by the hackers. The money was laundered through a Chinese bitcoin network, making it impossible to trace, and the hackers remain at large.

Countries initially affected by the WannaCry ransomware attack

Hackers, and the malware they use, have become increasingly sophisticated, with a larger set of victims at stake. Money has been the primary goal of these attacks, but what would happen if our connected devices became the target?

All these examples are astounding, but none can compare to the recent attack on Ukraine that devastated the country’s infrastructure. Banks, airports, companies and utilities were knocked out in what appears to be a state-sponsored attack by Russia. If this attack had been carried out with a targeted missile strike, it would have been classified as an open declaration of war.

Now why am I painting such a bleak picture? Because it’s reality, it’s what’s actually happening in the world, and yet we still forget our passwords, struggle to use computers and continue to use passwords like “12345” and “password” (seriously, these were among the most-used passwords in the world last year; please change your password now if it resembles either of these).

John Steinbeck once said that “all war is a symptom of man’s failure as a thinking animal.” I wholeheartedly disagree. There is a lot of thinking behind warfare, and the technical complexity involved in creating malware is a testament to man’s capacity to use reason to inflict damage. In fact, there’s another term for this kind of action: malice, the sin punished in the last (and worst) rings of hell in Dante’s Inferno. Even in hell, there’s a special place for those who use their intellectual faculties for evil purposes.

A snapshot of the world’s cyber attacks, by cybersecurity firm Norse, at the time of this publication

Let me go back to my opening quote by Clausewitz. Although hacking started as a solitary activity for edgy computer programmers, it has evolved to become part of every country’s defense strategy. Leaders will come to see cyberwarfare as an undercover way of obtaining information, sabotaging enemy assets, running smear campaigns (look no further than the 2016 American presidential election) and much more. Computer programming is no longer the exclusive realm of engineers and nerds; it is an essential component of society, and its population must embrace it as an essential skill to learn. Many times, we interact with machines more than we interact with other humans. It is common sense, therefore, to know our tools’ nuances and inner workings.

A carpenter knows his tools well and knows when and how to fix them should they fail. Computers, unfortunately, are immensely complex machines, created by some of the smartest people in history and machined down to the individual atom. We cannot hope to know how a computer works the way we know, say, a hammer. However, I still believe it is our duty to try, and in doing so we can be better informed and better protected from attackers who might want to steal our data.

Since every computer is connected to the internet, and the internet connects every device on earth, it creates a highway where anything can interact with the streams of data criss-crossing one another. We must shift our mentality from seeing things in a physical dimension to seeing them in a digital one. Updating our computers and not clicking on phony emails is good, but not enough. We must engage in a holistic campaign to protect our infrastructure, businesses and personal devices, and that starts with the population having the basic knowledge to protect themselves from these kinds of attacks.

The United States’ Second Amendment gives militias the right to bear arms; it is time to expand that understanding so that citizens arm themselves with the necessary defenses against malware attacks, for the sake of themselves, their businesses, and their country.

Apps are dead. Long live AI!

Consider this: I want to book a flight from Atlanta to New York for next weekend, and I want the cheapest ticket available. I also want it to be with an airline that is connected to my rewards program, preferably on a flight that has wi-fi. How would I do it? I will present two scenarios:

The traditional app way:

  1. I open up my phone and search for flights in the American Airlines app.
  2. I filter the results for a cheap flight that has wi-fi.
  3. Unfortunately, it’s too expensive, so I open up my Delta app and repeat the process.
  4. I finally find a flight at a good price and select it.
  5. I log in and input my credentials.
  6. I follow the on-screen instructions and pay with my credit card.
  7. I get the ticket and add it to my phone.

This is the fastest way you can obtain a flight today. It’s not bad, but I do have to sit down and go through the whole process. Now entertain, please, the following vision of an AI-based interface:

The AI way:

  1. I open up my phone, hold a button and say, “Find me a cheap flight from Atlanta to New York for next weekend, and make sure it has wi-fi.”
  2. The AI replies, “I found you a couple of flights. The top choice is a non-stop Delta flight that departs at 10:00am and costs $250. Would you like me to buy it?”
  3. I reply, “Sure!” and the AI responds, “Great! I put your ticket in your phone.”

The AI interaction in this situation is superior to the traditional app interface. It’s faster, more engaging, and takes much of the hassle out of booking a flight. Now, I’m not saying that every single interaction should be replaced by AI. There are indeed some instances where you’re better off seeing a screen and making the decision for yourself. But the fact of the matter is that having an AI is like having an assistant: it reduces logistical hassle and presents you with curated information that is relevant and useful.
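
For the curious, here is a toy sketch in Python of the kind of logic behind that exchange. The flight data and helper names are all invented; a real assistant would of course query live airline APIs and layer speech recognition on top.

```python
# A toy sketch of the assistant's search step: invented data, no real APIs.
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    price: float
    has_wifi: bool
    departs: str

FLIGHTS = [  # stand-in for live airline search results
    Flight("American", 320.0, True, "8:00am"),
    Flight("Delta", 250.0, True, "10:00am"),
    Flight("Delta", 180.0, False, "6:00am"),
]

def find_cheapest(flights, need_wifi=True):
    # Filter by the spoken constraints, then pick the lowest fare.
    candidates = [f for f in flights if f.has_wifi or not need_wifi]
    return min(candidates, key=lambda f: f.price, default=None)

best = find_cheapest(FLIGHTS)
print(f"Top choice: a {best.airline} flight departing {best.departs} "
      f"for ${best.price:.0f}. Would you like me to buy it?")
```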


Because of the superior productivity that AI provides, I believe that apps will be supplanted by AI services as the primary interface we have with our digital devices. In the long run, natural conversation (whether in speech or text) with AI programs designed to serve particular needs will be the go-to approach for interacting with our devices.

I envision a future where there will be a “general” AI installed on your device (think of Siri or Alexa), connected to multiple AI services that specialize in a particular subject, e.g., finance, biology, football. These specialized modules will have large corpora of data specific to their topic and will let users get information or perform tasks particular to that area of knowledge.
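
Here is a minimal sketch of that architecture. The keyword matching is a crude stand-in for real intent classification, and the module names are hypothetical.

```python
# A "general" assistant that routes requests to specialized modules.
def finance_module(query):
    return "Finance module: I can pull quotes and portfolio data."

def biology_module(query):
    return "Biology module: I can search papers and gene databases."

def football_module(query):
    return "Football module: I can fetch scores and schedules."

SPECIALISTS = {
    "stock": finance_module,
    "gene": biology_module,
    "touchdown": football_module,
}

def general_ai(query):
    # Dispatch to the first specialist whose topic appears in the query.
    for keyword, module in SPECIALISTS.items():
        if keyword in query.lower():
            return module(query)
    return "General assistant: I can help with everyday questions."

print(general_ai("What was the touchdown record last season?"))
```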

There are some who would say, “This is great! AI will take over and we won’t have to work anymore!” I completely disagree. Automation does not equal lack of work for people. It simply means we can better allocate our skills to other areas. Creating all of these different AI services will require thousands of developers to build them, and millions of people to run the companies that collect, assemble and input all of human knowledge into artificially intelligent systems.


Artists, writers, thespians, anthropologists, designers and a whole host of creative people will be required to create the scripts and personalities that run these systems. Far from making people unemployed and heralding the death of the humanities, the mass production of AI services will be a renaissance for the humanities as they explore how to re-create an artificial human mind that can serve its creators.

Should we become worried if these AI systems become our robot overlords, as Hollywood movies and Elon Musk portray? You’ll find out in my next blog post, so stay tuned!

The real problem of AI

I’m going to cut right to the chase here. There’s a growing problem: advanced AI algorithms are taking over human tasks while possessing little to no ethical benchmarks to assess their actions.

Think of these AI systems as small children. They learn about the world through their senses and, depending on their experiences, draw conclusions about it. In the same manner, what we feed into our AI systems determines how they think and how they’ll act. What happens, then, when a robot acts in a way that is advantageous to its programming but disadvantageous to the welfare of people?

Suppose an autonomous delivery truck is travelling along a road and detects a small object. We would recognize it as a child, probably one who has run away from home and is lying down in this desolate road. The truck senses the size of the object but doesn’t judge it to pose any risk to its driving. It can’t veer around, because the road is narrow, and it has a high-priority override to deliver express packages by the evening. The truck judges the risk of the object to itself to be minuscule, so it proceeds, ending the life of the child.

This is one example of what could happen if we don’t properly train our AI to make good moral judgements. A proper judgement requires full knowledge of the situation, so as to assess all the facts. It requires a conceptual idea of good and evil, in order to pursue good and prevent evil, even at the cost of economic gain. It also requires the proper capacity to judge circumstances. These and many other considerations need to be built into the algorithms of our AI.
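
To illustrate the kind of check I have in mind, here is a sketch of a decision function in which potential harm to a person vetoes the delivery schedule. All thresholds, names and probabilities are invented for illustration.

```python
# A hard safety constraint that overrides economic priorities (toy values).
def decide(risk_to_vehicle, chance_obstacle_is_person, delivery_priority):
    # Naive version: weigh everything against economic gain alone, e.g.
    #   if risk_to_vehicle < 0.1: proceed
    # Better: any meaningful chance the obstacle is a person vetoes the
    # schedule, no matter how high the delivery priority.
    if chance_obstacle_is_person > 0.01:
        return "stop and request human review"
    if risk_to_vehicle < 0.1 and delivery_priority == "express":
        return "proceed"
    return "slow down and re-scan"

print(decide(risk_to_vehicle=0.02,
             chance_obstacle_is_person=0.3,
             delivery_priority="express"))  # -> stop and request human review
```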

I mentioned something there that is extraordinarily important. So important, and so striking, that you probably just missed it: the concept of imbuing machines with notions of “good” and “evil.” These notions are necessary because they are the last reins of judgement when we decide to do something. I might be asked to collaborate on an insider trading scheme. I could feel the pressure to go along because, if I don’t, my manager could fire me. I could feel the allure of the money I would be making. However, I know in my heart that this would be the wrong thing to do. And so, forgoing all personal gain and instead facing everything to lose, I refuse to cooperate, because I sense it is the wrong thing to do.

Similar judgement should be imparted to our robots as they slowly start to take over more complex, personal jobs from us. AI robots are designed to maximize productivity, but where harm could be done, a robot must recognize the consequences of its actions and act in the way that is best for the prosperity and flourishing of its creators. That should be the last line of defense for any person living near an AI system, so that they are assuredly protected from unintended harm.

I cannot say how the concepts of good and evil will manifest in an algorithm. They seem so much more intuitive than a set of logical parameters. I might try to tackle this in another post.


What I learned from building a neural network. Hint: the robots are coming!

I’m working toward a developer certification for IBM’s Watson AI, and one of the learning requirements is to understand the basics of artificial neural networks. In order to retain the information better and understand the underlying processes, I decided to actually create a neural network, with the help of Stephen Welch’s excellent “Neural Networks Demystified” video series.

I honestly did not expect it to be so complicated. Of course, it’s machine learning; it’s not supposed to be easy. But still, the number of equations that describe even the basics of a neural network was… out of my comfort zone, to say the least. Nevertheless, it was eye-opening. Artificial neural networks (ANNs) are a mathematical and programmatic representation of how neurons and axons work. I am not going to delve into the mechanics here, but suffice it to say that these ANNs are the beginnings of a general artificial intelligence: one that can think, understand and display intuition.
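
To give a taste without the calculus, here is a minimal forward pass in Python, modeled loosely on the 2-3-1 network from Welch’s series (hours of sleep and study in, a predicted test score out). The weights here are random; training them with backpropagation is where the heavy math lives.

```python
# A minimal 2-3-1 forward pass in the spirit of "Neural Networks Demystified".
import numpy as np

def sigmoid(z):
    # The "neuron's" activation: squash a weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input layer (2 features) -> hidden layer (3 units)
W2 = rng.normal(size=(3, 1))   # hidden layer -> output (1 unit)

def forward(X):
    # Each layer: weighted sum of inputs, passed through the activation.
    hidden = sigmoid(X @ W1)
    return sigmoid(hidden @ W2)

# Rows: (hours slept, hours studied), normalized. Output: predicted score.
X = np.array([[0.3, 0.5], [0.5, 0.1], [1.0, 0.2]])
print(forward(X))
```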

A demonstration of how artificial neural networks mimic real, biological neurons. Source: InTech

The implications of this kind of technology are profound. It got me thinking about the economics of implementing such a system, only to realize that we are already in the midst of a global upheaval thanks to the introduction of machine-learning algorithms.

In 2011, Marc Andreessen, an early investor in Facebook, Twitter, Pinterest, and many other Silicon Valley “unicorns,” wrote:

“Software is eating the world.”

His statement still holds true, but I’d change it slightly to say, “AI is eating the world.”

Unfortunately, the general public’s conception of AI is limited to Hollywood movies and is almost completely abstracted from the real-life implementations of this technology. Many are unaware of how much it has infiltrated their lives. You can attribute your Netflix binging and endless YouTube watching to the power of machine-learning algorithms providing you with “suggestions” and “recommendations.” These services profile your every move, every bit of information, to pinpoint your demographic and provide you with content that statistically fits with other people like you.

Yes, in AI, you are just a statistic.
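
Here is a toy sketch of how a “people like you” recommendation can work: score your unwatched items by the ratings of the users most statistically similar to you. The ratings matrix is invented for illustration.

```python
# A toy user-similarity recommender (invented ratings; 0 = "hasn't watched").
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 0],   # a user with similar taste
    [1, 0, 2, 5],   # a user with different taste
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = ratings[0]
others = ratings[1:]
similarity = np.array([cosine(you, other) for other in others])

# Predict your score for each video as a similarity-weighted average of
# what statistically similar users rated it.
predicted = similarity @ others / similarity.sum()
unwatched = np.where(you == 0)[0]
best = unwatched[np.argmax(predicted[unwatched])]
print(f"Recommended video: #{best}")
```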


But AI does much more than that. Look no further than autonomous cars, self-running factories in China, and virtual assistants to see how this technology will seep into every industry of the market.

With such a powerful tool in our hands (quite literally), it is unfortunate that the labor market, and the institutions that feed into it, are not prepared for this transformational change. Most universities don’t have AI programs in place. Coding is still seen as the realm of engineers and nerds. Companies still run outdated operating systems like Windows Vista and use fax machines to exchange information. A large portion of the economy is simply lagging behind in its ability to change and adapt to an AI-based economy.

Now, this is not entirely their own fault. Artificial intelligence is a very complex subject, as I discovered. It requires advanced mathematics, advanced programming experience and a good number of years of practice to develop an effective AI architect. The resources invested to produce such a focused individual are akin to the training regimen of a special-forces soldier. It takes a lot of time, energy and talent to produce this worker of the future. However, such a worker will become indispensable to the future economy.

AI is like having a self-replicating mind: another mind that does not need to be fed, does not sleep, does not complain, does not need health insurance, and is millions of times more powerful at mathematical computation than any person alive. It is the virtue of a capitalist society to employ such a tool if it deems it economically advantageous. It would be illogical not to employ it.

But herein lies the crux of the matter: a small number of people will become extremely productive in the creation of wealth thanks to their use of AI, but what will become of everyone else?


Greater productivity is the holy grail of economics. It means a country can produce more, for less, at a faster pace. Global productivity exploded after the industrial revolution, thanks to industrial machines. Then it sharply increased again with the advent of computational machines. Now it’s due for another increase with the advent of commercial artificial intelligence. Here are three reasons why I believe the rise of AI is bad news for the global labor market:

1) Job replacement will happen faster than job creation

2) Productivity will be focused in a corporate oligopoly

3) “Enormous Data” will provide these companies a competitive advantage over the rest of the market


Job Replacement

This is a big one, especially since it’s become so politicized in the last couple of months. Jobs have always been replaced by the coming of newer technologies. When the car became mass-produced, the horse-carriage industry (the traditional mode of transportation for centuries) went into irreversible decline. However, the collapse of this industry was offset by an even greater upswell of economic wealth created by the car: stables gave way to gas stations, horse drivers to valets; streets needed to be paved, cars needed to be maintained, and manufacturing increased to keep up with demand. Thus, older technologies are usually supplanted by newer ones thanks to the new jobs they create.

In an article for MIT Technology Review, Joel Mokyr, a leading economic historian at Northwestern University, commented on the increasingly fast pace of disruption:

The current disruptions are faster and more intensive…It is nothing like what we have seen in the past, and the issue is whether the system can adapt as it did in the past.

He further states that the jobs most susceptible to this change are those built on routine, automatable tasks, usually held by lower-wage workers. If these workers are to keep their jobs and adapt to the new AI economy, they must obtain a degree in computer science or a similarly technical field, as well as a specialization in whatever field they will be working in. This kind of education is expensive, and it falls to the government to fund their re-education; these blue-collar workers usually do not have the resources to pay for a college education. If the government doesn’t help them, they simply won’t be able to re-educate themselves for the changing market needs and will fall into poverty. David H. Autor supports this view in his piece for the Journal of Economic Perspectives, “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” He argues that, given the rapidly changing dynamics of the AI economy, job displacement will rise significantly if education programs for low-skilled workers do not take place:

…human capital investment must be at the heart of any long-term strategy for producing skills that are complemented by rather than substituted for by technological change.

A Minnesota factory worker with Google Glass 2. Source: Wired magazine

Nevertheless, he’s still fairly confident that AI will not completely displace jobs but rather complement them. Many blue-collar workers such as plumbers, electricians and HVAC installers will use AI to become more productive in their jobs, not necessarily be replaced by it. I agree with his view. Microsoft and Google have both released augmented-reality headsets that are being tested to aid workers in their day-to-day jobs: the machine tells the maintenance worker where the screws go, where to find a missing part, and so on. In fact, Google has already deployed a revamped version of its hyped Google Glass product at a factory in Jackson, Minnesota (a highly interesting development, covered in Wired magazine, which I will probably comment on another time). I do not want to dwell on these commendable efforts. Rather, I am much more concerned with the employees of large corporations who perform task-intensive jobs day in and day out. Think of the thousands of workers in Foxconn factories building iPhones, or truck drivers delivering merchandise. It is estimated that self-driving trucks “could threaten or alter 2.2 million to 3.1 million existing U.S. jobs.” What will happen then? A commenter on the previously mentioned MIT article offered some truthful insight when he wrote:

The problem is not the technology: it’s the implicit and explicit social and business agreements we have presently in society.

The ultimate problem with job displacement is not so much unavoidable technological advances leaving people without jobs. It’s that we, as a society, have failed to properly organize ourselves to fit the needs of the market and to put the required resources into the training and well-being of our workers. Public companies are under immense pressure to perform and have put profits over their people (not that this is a new issue). If we are to avoid a massive displacement of jobs, we need government and businesses to take appropriate measures to protect workers by providing them with the education and skills that will let them stay competitive in an AI economy. It is our duty to use our God-given talents to help others, and therefore the virtue of a good society to provide the means for its people to achieve this end.


Corporate Oligopoly

Ah, we enter a favorite topic of doomsayers and conspiracists. The idea that a few companies will reap most of the profits from a market is far from new: six movie studios receive almost 87% of American film revenue (boxofficemojo.com); Facebook and Google account for almost 50% of the online ad market and are responsible for 99% of online ad growth; Russia’s oil industry is still controlled by a few producers. The list of examples is endless, and oligopolies aren’t always bad for an economy. They can streamline the production of goods and services, lower prices for consumers, and provide greater profits to their shareholders.

I strongly believe the AI market will inevitably become an oligopoly (if it isn’t one already), and profits will become even more concentrated in the future. Facebook, Alphabet (the parent company of Google), Amazon, Alibaba, Microsoft and Netflix are the leading technology companies in the world. They are stock-market giants that have delivered returns much greater than the market, are leading the world in AI implementation, and are innovating at the fastest rates as well. They show no sign of slowing down. They have methodically disrupted every industry they have touched (a single trademark filing from Amazon was enough to plunge shares of meal-kit delivery company Blue Apron by more than 30%), and have digitized many of their processes. They have also concentrated the wealth of these industries among relatively small teams. WhatsApp was bought by Facebook for $19 BILLION and had only about 50 employees…50 EMPLOYEES.

Careful there! Each one of those employees is worth hundreds of millions of dollars

Due to a talent shortage in data and AI, these companies compete with one another by offering perks and stock options to employees. Startups frequently do the same, as a way to defer salaries while they start earning money. That’s fine and all, except when these companies grow to enormous valuations and the first few employees hold the majority of the company’s wealth. Amazon still pays its warehouse employees $12 an hour (per Glassdoor), while the company is valued at $500 billion and its CEO is the richest man on earth (as of July 2017). A recent article in the Guardian showed how Nicole, a cafeteria worker at Facebook’s headquarters, lives in a garage with her family, barely making ends meet. “He doesn’t have to go around the world,” said Nicole. “He should learn what’s happening in this city.” She’s referring to Zuckerberg’s highly publicized world tour, which started as his New Year’s resolution to “get out and talk to more people.”

“They look at us like we’re lower, like we don’t matter,” said Nicole of the Facebook employees. “We don’t live the dream. The techies are living the dream. It’s for them.” Source: The Guardian

It’s unfortunate cases like Nicole’s that highlight the growing divide between the middle class and the upper class being populated by techies. In a report highlighted by CNBC, a record number of Americans were millionaires in 2016; there was also a record 50% decline in the number of people who qualify as middle class, and “one in three say they couldn’t come up with $2,000 if faced with an emergency.” Thus, the corporate oligopoly has concentrated the wealth of the new economy in its founders, and the promise that the masses would be liberated to freelance and work on their own thanks to new digital technologies has proven false for all but a fortunate few.


The Rise of Enormous Data

Think Big Data was too big to handle? Enter Enormous Data. Seriously, it’s the new buzzword in the industry.

Stay tuned for updates and I appreciate your comments and suggestions.


Law & Order: Episode 2

See the first post here.

The previous post finished with a question. I hope to finish this one with an answer. I asked, “How do we know what kinds of laws are necessary?” and, “Can they be prevented from infringing on an individual’s rights?” Since these questions are so large and encompass thousands of years of discussion, since this thread was created to answer a single web post, and since I consider myself far from the ideal person to give an authoritative response on this topic, I will digress here to focus on exactly one statement:

“Don’t like your rights taken away? Don’t try to take the rights of others.”

Please, please see my first post for context before reading on. I know you will ignore this, but at least my conscience will be clear… Still haven’t read the first post? Fine then, let’s continue.

We stated in the previous post that no matter how much freedom a state allows, it will still have opinions, for laws decidedly tell us what is acceptable or not (at least in the civic arena, if not necessarily in the moral sense). And if a state has opinions, it will decidedly not allow you to hold certain views. This is inescapable.

If a state makes it illegal to steal, it doesn’t matter if your opinion is that stealing is OK; the state’s laws will be enforced on you. If you don’t like it, you cannot belong to that state. But what if you consider stealing to be your “right” as a human being? Well, what does it mean to have “rights”?

It seems to me that rights are proclamations of the things that make us inherently human.

Why’s that? Because rights seem to denote privileges. You might say that animals have rights, or even that trees have rights. But to say that humans have rights is altogether something above and beyond what we would consider animal rights. A dog does not have freedom of expression. It does not have a right to property, or to liberty. It has rights that protect it and keep it alive and well, but the aforementioned human rights are extraneous to the dog. We, instead, call them essential to our species.

Think of Thomas Jefferson’s famous preamble to the Declaration of Independence:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

Rights are intrinsic values that are essential to humanity. Although you would very much like a dog to have a good life, and see that it is good for it to have one, it is not a right to the same degree that a human would demand. Why? Because we have intellects and the capacity to will. An animal is guided by instinct, but a person can determine him- or herself.

The statement at the beginning of this post is true: we cannot take away the rights of others, because those rights are intrinsic to them. But the context in which it is written is mistaken. Rights should help us achieve goodness. Why? Because rights aim to perfect that which is essential to ourselves. We allow a person to have property because that way he can determine himself and his future. We allow people food and shelter because man is not a beast and requires suitable housing to live and prosper. We protect life because it has been begotten to us by our Creator, and we mourn when it is taken away.

Therefore, personal rights direct us to our intrinsic goodness and happiness, but they prevent us from achieving it when disguised under our own opinions and prejudices.

Although a person is free to eat rotten meat, the good thing to do would be to prevent him from exercising that “right,” for the sake of his overall well-being. The person might think it’s a delicacy, and write an opinion piece in the New York Times on people’s right to eat rotten meat, but the fact is that the law is protecting him, so that he can continue expressing the very rights he has twisted to his own designs.

We can now both affirm and refute the statement in question. No, we cannot take away the rights of others, but the reason is not that a person has simply said “it’s my right”; it is that we can recognize the intrinsic values that make him essentially human, and thus what is conducive to his flourishing.

Thanks for reading! I have decided to keep this thread, “Law & Order,” as a topic of discussion for matters of politics and the philosophy of the state. In no way am I attempting to replicate, profit from, or make use of the name of the actual TV show.