If any article I write is going to be worth skipping, this will be the one. The reason why: it's about a contemporary, developing topic. Most people who talk about anything of value should know that whatever concept or topic lies in "the new" is going to get the worst discussion. New topics have no depth, very little theory behind them, and every pseudo-intellectual will have an unprincipled pet opinion to toss into the ring on the subject. Most of the time, these pet opinions are forgotten the second after they're uttered because a newer new comes out to discuss.
Now, machine learning is a little different. It's old news in the tech sphere. Tech schizos have been talking about these algorithms forever, trying to warn normal people that they're being manipulated by machines into watching ads and having their opinions molded. However, these algorithms recently pulled off a cool parlor trick: they learned to draw, and the world of discussion was set ablaze.
This discussion usually dies down after these tools release. Everyone gets bored and moves on. That's what happened to virtual reality, augmented reality, 3D printing, and all the other technologies that people thought would change the world. However, we're about a year on, the machine learning and AI scene is still lively, and discussion is still bouncing around. These tools have imprinted themselves into culture much the way social media has, and it's unlikely they'll be going away because of it.
So, given that the topic is still new but obviously staying, I think it's finally time to toss my pseudo-intellectual opinion into the ring. It's still too new to say anything for sure, so these opinions will be much weaker than the ones in my other works, but as a techie and former Andrew Yang supporter, I think it's fair to say I've thought about this enough to say something on the subject:
The Pragmatics of ML
Machine learning algorithms have been around for a while, as stated, and their ability to suppress and show content is well known. That topic, despite being more important than parlor tricks and job replacement, is boring: algorithms being used in an unjust fashion doesn't draw eyes the way starving artists do, so we won't be discussing it.
To give a summary of my opinion on this form of machine learning: if people properly owned their computers and their systems, directly publishing content to a network without any overlord filtering or sorting it for them, none of these problems would even exist. An algorithm used by a user, defined by a user, for the user's specific purposes cannot be under hostile, outside control. This kind of freedom would liberate computer users from any sort of overlord and let them do as they please with the devices they own. However, the totalitarians in our governments would hate the idea of their subjects having the basic freedom to use the tools they own however they wish, so keeping the centralized state of affairs (and just regulating it) will be much more desirable to them. Tl;dr, support the Free Software Foundation and your local peer-to-peer networks.
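To make that concrete, here's a minimal sketch (my own toy illustration, not any real project) of what a user-owned algorithm could look like: a filter you write yourself and run locally over posts pulled from a peer-to-peer network, with no platform in the loop deciding what you see. Every name and post in it is made up.

```python
# Minimal sketch of a user-defined, locally-run feed filter (hypothetical;
# not a real tool). The ranking rules live on the user's machine and only
# the user can change them.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# The user's own, editable rules. No outside party can touch these.
BLOCKED_AUTHORS = {"spam_bot"}
BOOSTED_KEYWORDS = {"free software", "p2p"}

def my_score(post: Post) -> float:
    """Score a post by the user's rules; blocked authors sink to the bottom."""
    if post.author in BLOCKED_AUTHORS:
        return float("-inf")
    text = post.text.lower()
    return sum(1.0 for kw in BOOSTED_KEYWORDS if kw in text)

def my_feed(posts: list[Post]) -> list[Post]:
    """Drop blocked posts, then sort the rest by the user's own scoring."""
    kept = [p for p in posts if my_score(p) != float("-inf")]
    return sorted(kept, key=my_score, reverse=True)

# Example posts, as if fetched from some peer-to-peer network (made-up data).
posts = [
    Post("alice", "New release of my free software project"),
    Post("spam_bot", "BUY NOW"),
    Post("bob", "Some thoughts on p2p publishing"),
]
for p in my_feed(posts):
    print(p.author, "-", p.text)
```

Nothing fancy, but the point is that the blocklist and the scoring rules sit on your machine, under your control, and nobody upstream can quietly reshape them for you.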
Back on topic: machine learning replacing human intellectual labor (and possibly physical labor, if robotics takes off) is the big scare. If artists can be automated, programmers can be automated, and writers can be automated, then what should people do?
Economic
First things first, the thing that will keep you fed: economics. The current way we think of economics could start to break down once human beings are deprecated. For most of industrialized history, we have kept humans important by putting them to work, first in factories, then in service jobs. This is largely still what we do: foreign workers staff the factories that provide our goods, and citizens do the service jobs.
Machine learning algorithms seem like they'll harm the intellectual parts of the service industry. This section of the service industry includes marketing, writing, and programming. All information-centric jobs. So what happens if these people are put out of the economy?
When people are put out of one industry, they shift to another. So, if programming goes kaput, those programmers will have to go somewhere else to work. A surplus of labor in other industries will drive wages down, and unless there's a correction in the price of goods to compensate, we'll see many people go hungry.
So long as big centralized oligarchies aren't the ones providing our services, we'll likely see market corrections in the cost of goods. If we can deprecate humans, and thus human salaries, in many areas of our lives, we should see a massive decrease in the price of goods, making life easier to live. So, while yes, everyone may be a cashier or a nurse, those jobs should pay enough for people to live comfortably.
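To put rough numbers on the argument (all of them made up), what matters is the real wage: pay divided by the price level. If prices fall at least as much as wages do, people stay whole; if they don't, the same wage cut is a straight loss of living standard.

```python
# Back-of-the-envelope sketch of the argument above. Every number is invented.
# Real wage = nominal wage / price index; it has to hold steady for people
# to keep living at the same standard.
wage_before, prices_before = 3000.0, 1.00   # monthly wage, price index
wage_after = 2100.0                          # 30% wage cut from the labor surplus

prices_if_corrected = 0.70     # automation savings passed on as cheaper goods
prices_if_uncorrected = 0.97   # savings pocketed, prices barely move

print("real wage before:        ", wage_before / prices_before)              # 3000.0
print("with price correction:   ", wage_after / prices_if_corrected)         # 3000.0
print("without price correction:", round(wage_after / prices_if_uncorrected))  # ~2165
```

Whether that price correction actually happens is the next question.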
However, as any astute reader should point out, that's not the state of affairs. We unfortunately live under an oligopoly, the rule of a very few giants. These giants need not compete with each other if they can cooperate for mutual benefit. It's happened before in the light bulb industry, and who is to say it won't happen agai-. Oh wait, it already did. Actually, the companies aren't the only ones participating in the oligopoly; the government is in on it too!
In our current state of affairs, with our technologies, the centralized model of doing business is just vastly superior to the decentralized one. A factory can pump out tin cans way faster than your average person can, even if he has a 3D printer. Similarly, a centralized hold over a talent pool (such as in the service industry) is just way too convenient for getting your hands on people. Uber provides easy access to drivers, and the decentralized method of finding drivers, at least right now, just isn't good enough.
To my knowledge, no industrialized society has gotten over the industry-centralization problem. Even the socialists have socialist governments that manage their factories. So it seems that, in some way, somehow, monopoly or oligopoly is just inevitable.
This oligopoly problem breaks the natural downward flow of prices, while the surplus of labor in unskilled markets keeps wages down. That is not a good state of affairs for anyone in the information industry, and it gets even worse if this machine learning stuff turns out to be pretty good at manual labor or hospice work too.
Maybe Andrew Yang was correct: UBI is the way to go…
UBI as welfare-state-induced ideological brain-rot
Universal Basic Income (UBI) is a form of social welfare wherein people are given a direct, minimum income, usually funded by taxes. This orchestration of wealth transfer is much better than the bureaucratic nonsense of good old welfare, which wraps the money in conditions and perverse incentives; UBI just outright gives people money, unconditionally.
The lack of requirements for the money means that no adverse effects happen, besides pumping a lot of money into the economy. No incentives for "welfare babies" or sleazing your way into disability, just a flat check given unconditionally. In its most ideal form, it's a dignified transfer of wealth with respect to the individual's needs. The government doesn't tell you what to do with the money; it just takes from someone else and gives it to you. If you're well off, the money could go towards a business venture or a vacation; if you're not, it could go towards a tin of beans. Whatever the case may be, it's money for you to live on.
Now, this idea doesn’t exist in a vacuum. It’s subject to politics, specifically democratic politics in most nations, like any other policy. So, due to it being a law that can be edited, it’s a law that can be contorted into ugly fashions. In the most basic case, voters will always go with a candidate that has their interests in mind, that’s generally seen as a good thing, but if a candidate is promising to up UBI, then what they’re doing is effectively buying votes with other people’s money. This type of acceleration in populist economics will quickly bankrupt a nation by scaring off the noblemen who bring in business. Maybe this mode of operations, to scare off the machine learning algorithm owning companies from your nation, would be a based-retards way of solving the problem, but it’s far from ideal.
Still, this basic economic problem is something we can just ignore. Let's pretend politics doesn't affect UBI. Instead, our welfare will be enshrined in our constitution, unchangeable by any amendment. The political maneuvering to make this happen would likely be the start of a new nation, but let's just pretend it's possible in our current state of affairs.
In such a system, we will have machine learning algorithms doing work for us; that work will be owned by a centralized entity; that entity will pay taxes to a centralized government, which will hand a centralized currency to the public to then spend at said companies. To me, this sounds like a very inefficient form of state socialism. Just have the government take over the companies, dole out government currency to everyone, and then have everyone redeem goods and services with said currency. That way, you get rid of the inefficiency of having multiple administrations (those for business and those for governance), when you could just have one big government administration and gain the efficiency of centralization.
Either way, whether you decide to go full-blown centralized socialism or technically keep "private" firms, there is still a big problem: individuals must depend on a centralized body (or set of centralized bodies) to operate in their interest. These centralized bodies will hold total control over the finances of each individual, and if an individual is deemed to be a problem, he may be denied funding. Granted, this brings the question of politics back into the subject, undermining the universality of universal basic income and making me go back on an axiom I already granted. That is true. The subject is still important, though:
Even if the government is good and UBI works perfectly, that doesn't mean the government is eternal. By making the entire population of a nation effectively dependent on an outside resource for monetary compensation and the ability to live, we will make the people weak and unable to defend themselves or rebuild in case of disaster or collapse. Over a couple of generations under UBI, the skills that have been handed off to machine learning algorithms will no longer be taught or learned; they're simply not economically viable to know. Why learn programming when a robot can program for you? Of course, there will always be hobbyists and gurus interested in the subject who may pursue it at their own leisure, but that is far from enough to maintain a healthy computing scene if the centralized system collapses.
UBI, if nothing else, promotes fragility and dependence among the population by making self-sustaining skills no longer economically desirable to learn. Even if this is a good thing for an individual, it is generationally bad. If programming skills aren't kept up to scratch, they'll be lost and forgotten by the next generation. There will be no one to teach a kid how to program, write, or draw, because no one bothered to learn it when there was no reason to know it. This kind of promoted dependence will be a disaster if, or rather when, things fall apart.
Remember, all things fall apart. All things die; empires leave, religions fade, ways of thinking crumble into dust. Entropy is a hungry beast, and it will chip away at any order man makes. If a man's stability is derived from holding onto a giant beast, then when the beast falls, the man will fall with it, unable to walk away because he never learned how to use his own legs.
A shift in economic thinking…
So, if UBI (and socialistic programs in general) are bunk, then what isn't? Well, nothing. Nothing currently has the ability to stand up to this possibility. Most people clamber onto the ideologies of now, hacking at them and reinventing them to try to answer the situation, but the situation is so fundamentally different that a new approach is necessary. Even the utopian communists acknowledged that man had to work; if man doesn't need to work, then what do we do?
Well, we reassess what economic progress means. To your average person, economic progress means an abundance of desirable items at cheap prices. When the economy is good, they can buy more stuff; when the economy is bad, they can't buy as much stuff. This simple system of thinking dominates the mind of just about everyone. However, there exists a scenario that breaks the entire system:
If the economy is extremely effective at producing goods, but nobody is capable of buying them, what happens? First, we discover that the system necessitates economic transactions between people; second, we realize that people cannot make purchases in such a system; and practically, the inability to make purchases is just about as bad as a lack of abundance. Given that the two things being optimized for, high abundance and high accessibility, eventually cancel each other's good out (with abundance dragging accessibility down), we'll have to change the system or ride on its edge forever.
Riding the edge of our current system forever is very unlikely because that position is unstable: it puts a cap on growth, which our system is optimized to pursue. Stockholders want to see growth in their portfolios, and if a yuppie declares he can cut costs, meaning more profits, by replacing a couple of divisions with machine learning, the stockholders will go with him. Governments can't regulate fast enough to deal with this, and even if they could, the lobbying of stockholders pursuing cheaper goods will always be a threat. This state is very unstable and could boil over into the worst-case scenario of economic collapse at any moment.
Alternatively, we redefine good. Machine learning systems centralize power into the hands of a few, who then use that power to create goods or services that are sold to the public. Machine learning systems do not require humans, and because they don't require humans, they decrease the number of people being paid, consolidating wealth further. Since we have a permission-based system where the public pays private property owners for the fruit of their labor, the private property owners are capable of denying the public access to their goods if they cannot pay. Since people are unnecessary, they aren't paid, and thus are unable to buy goods. This is our crisis at hand.
The solution is to cut out paying a private owner for the fruits of their labor. This should be achieved by distributing manufacturing power among the public as widely as possible. If a man can make food at home, without permission from anyone else, he will not starve even if he doesn't have an income. The man who depends on the factory farmer for his meat will have to pay money for it, which is incredibly difficult if being paid enough to survive is a rare sight in the post-ML world.
The same idea then moves into manufacturing and machine learning too. If a man has control over his own means of production, he doesn't need permission, or need to pay, to use what he owns. Instead, he just uses it. If we can put manufacturing in the hands of the wider public, then the wider public can manufacture without permission from a private enterprise or other central authority, thus eliminating every cost of production besides the materials that go directly into it.
This organizational scheme may sound scary at first, and to the modern man, addicted to his captors, it is scary. However, this is how tribal humans operated, and largely how subsistence farmers operated. People's means of production (their hands) were used to manufacture products (usually food and clothing), which were then used for their survival. Carrying this old economic scheme forward into the modern age would be the healthiest way to operate our society. However, to do that, we must reject abundance and cheapness as desirable signifiers, and instead adopt ease of access as the ideal thing to optimize for.
The only way that can happen is if talented and wealthy men invest resources into cheap, decentralized manufacturing now, and the common man adopts schemes of production that are already in line with this. For the commoner, this means growing your own food, fixing your own machines, using software and services that don't require someone else's permission to use, and generally supporting a local manufacturing economy. The more decentralized, the better. We may not get the utopian localist economic scheme, but the closer we are, the better.
Artistic
Now, that was one big text wall on boring old economics. I think everyone can agree that we need a break from that subject. So, instead, we will be talking about the issue of art and machines.
What is art?
Art is a fickle thing. It takes many forms. Despite what digital drawers want you to think, art is not just pretty pictures. It's writing, poetry, music, paintings, plays, acting, and many, many more things. Art is a very varied subject, and these days it's getting harder and harder to define (though that's more due to the lack of good art than to there being so much of it).
Art, at least good art that resembles what art is, is derived from reality. It is a human depiction, to ourselves, of something we believe is important or real. A photograph would be the most perfect art form by this measure: photography captures a moment exactly, showing us a literal image of reality as it was during a certain time frame. However, it lacks the human-experience factor that makes art a uniquely human endeavor.
For this, we must look to the entertaining art forms: paintings, music, plays, and poetry all take some aspect of the real world and merge it with the experienced world to make a rhetorically enhanced statement about the nature of existence. Artistic works speak to both the dry, factual nature of life and the chaotic, experienced nature of life. A good balance of both is necessary for an art piece to be interesting. If art is pure experience, it's nichely tied to one person or one set of people, and anyone outside that in-group may not understand the piece, or may even scoff at it. If art is pure rationality, it'll be as dry as a technical manual, causing everyone but the autistically interested to lose interest in the dull facts being shown.
Good art, in this sense, is a reflection of the human experience onto some mode or medium. It should be technically competent and pleasing to attract the best eyes, but at its core, art doesn't need to be technically competent to be art; it just needs to have "sovl".
Why machine learning can replace humans
Given the definition I gave of art, people may be shocked or offended at the thought that a non-human, mechanistic being could ever replace artistry. A machine doesn't experience like a human does. It doesn't understand social constructs, the importance of tribe, family, or history, the peculiarities of what makes a group different from or similar to another, or the process of manufacturing literary allusions to connect experience and reality. It's merely a statistical predictor of what is most common. Machines just don't get what it means to be human.
This assessment of machine learning is correct. It doesn’t get it, but neither do most people who call themselves artists. It’s the most dead trope of the era to say “modern art sucks”, but it really does. It sucks, and it sucks hard. The drawings people make, the writings people write, and the plays people act out all lack sovl.
Corporate Memphis is a common sight in our visual culture, movie remakes are pumped out for their nostalgia (and therefore consumption) factor, and what most people read are social media posts with idealized, hedonistic depictions of life from people they don't even know. Maybe someone can say this is "lowbrow" art and that highbrow art is better, and they're right, but this lowbrow art is still art, and it's worse than the lowbrow art of yesteryear. People surround themselves with artistic degeneracy. Everything they look at is commodified for maximum engagement and profit. The closest thing to sovl that comes out of this pit of artistic degeneracy is the bad "funny because relatable" minion meme.
Now, this is something that you can’t imperially measure, but you’ll know it when you see it. Compare the esoteric shitposting of terminally online weirdos against the humor of “Jimmy Kimmel Live!”, or the political satire of US News.com’s cartoons page against the layered clusterfuck of a JReg video. The shitposting and JReg videos, while much more crude and less formal than those of Jimmy Kimmel and US News, talk about more complex things, appeal more to the fears and desires of groups in the know, and are, despite not attempting to be relatable, are more relatable by nature of talking about issues in ways that resonate with the audience.
Machine learning algorithms excel at matching the technical skill of most artists by copying their style, but an artist who is in touch with reality (even if he's not technically good at what he does) can uniquely make a connection with his audience and say something grounded in reality. That's something neither machine learning algorithms nor bad artists can claim about themselves.
The incest problem
Speaking of deriving, a good portion of art these days derives heavily from other media. That's not necessarily a bad thing when deriving from media contributes to a bigger statement about reality, but it is a problem when media is the majority of, or in some cases the only, thing being derived from. If all your art is fan-art, then it's merely a tributary piece to another piece of media. That's fine on its own, but when everything is a tribute to something else, and you start getting layers of tributary pieces paying tribute to each other, you reach a form of cultural incest where art about art is what informs people about cultural reality.
Art, as a rhetorically powerful tool, tells people things. It lets people become aware of perspectives, analogies, and facts that they could never know within their own lifetime. It's a permeation of humane knowledge (rational fact and experienced reality) to other humans. If your art derives from art and nothing else, then what you're communicating is that your art about art says something true about reality.
Take, for example, the task of drawing a princess. I'd guess a good portion of people, when doing this, will attempt to draw a Disney princess. This is because, in the collective consciousness of what a princess is, the Disney princess is the most prevalent one. So, you will draw a Disney princess, making art about art and having your piece be a mockery of reality. If a girl wishes to dress like a princess, she'll study Disney princesses rather than old European princesses. If you showed off an old European princess outfit to a European and asked what you were, it's unlikely they'd say "a princess," since you wouldn't look like a Disney princess.
This effectively builds a flanderizing effect into society. The most extreme, media-portrayed parts of reality become experienced reality for most people. If a woman sees only 10/10 girls while surfing through media, she'll be pressured to pursue those unrealistic goals and may develop body dysmorphia problems from it. It's getting to the point where "Snapchat dysmorphia" is being discussed as a real condition. Unreality, the reality found in media (social media in this case), is now dictating the experienced reality and cultural standards that we expect of ourselves.
This art-induced reinforcement of extreme traits, optimized as of now for human attention, can be further accelerated by machine learning systems. Machine learning systems pick up on what is statistically most common for a given topic. If these algorithms end up reinforcing themselves, they may exacerbate the very characteristics they're looking for, to the point that they eventually destroy the entire concept through an autistic, mechanistic hyper-fixation. If this is tamed and aimed at the profit incentive, we may slowly see culture twist toward an ever more flanderized definition of what makes an object desirable. Sex is the most obvious case. If machine learning algorithms suggest that heftier women are sexually desirable, and that is what gets promoted more and more, people may come to believe that these women are indeed what's desirable. This may then lead to a feedback loop where fatness becomes what the machine learning algorithms look for, and what they accidentally create desire for in the audience when they create art. Remember, a princess is only a princess now because she looks like a Disney princess. In this reality, a good-looking woman is only a good-looking woman if she's 300 pounds overweight. Hopefully natural biology can thwart that awful land-whale reality from coming to fruition.
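For what it's worth, the feedback loop itself is easy to demonstrate. Below is a toy simulation of my own (not based on any real recommender or generator): a "model" trains on whatever content got the most engagement, engagement slightly favors the more exaggerated trait, and each generation of output drifts further toward the extreme.

```python
# Toy simulation of the self-reinforcing loop described above (illustrative
# only; the numbers and the "engagement" rule are made up).
import random

random.seed(0)

def engagement(trait: float) -> float:
    # Hypothetical audience: a slightly more exaggerated trait reliably
    # scores a little higher, plus some noise.
    return trait + random.gauss(0, 0.1)

trait_mean = 1.0  # where the exaggerated trait starts out in the culture
for generation in range(10):
    # The model generates content scattered around what it last learned.
    content = [random.gauss(trait_mean, 0.2) for _ in range(1000)]
    # Only the most-engaging slice gets amplified and fed back as training data.
    top = sorted(content, key=engagement, reverse=True)[:100]
    trait_mean = sum(top) / len(top)
    print(f"generation {generation}: average trait = {trait_mean:.2f}")
```

Each pass selects for a little more of the trait than the last, so the average ratchets upward without anyone ever deciding it should. That's the whole flanderization mechanism in a dozen lines.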
Summary of Machine Learning-based Art
Machine learning algorithms, as of now, are disconnected from reality when they make their art. Because of this, they can never hope to be in touch with reality, which is required for art to be good. Thankfully, this means that true human artists are safe for the time being. It doesn't mean that people who produce sovlless content are safe; in fact, they're the most in danger of being replaced.
Those people who produce sovlless art already produce art that is disconnected from reality; it's just that machine learning algorithms are better at doing that. Besides some job losses, the only practical change will be that the artistic incest loop currently going on will spin faster and faster, making everything even more mediocre and samey, and possibly shifting the public's greater tastes toward what is mediocre but attention-grabbing.
If the public’s greater tastes aren’t changed, then mass-produced, commodified art may finally collapse in on itself due to its inability to resonate with its audience. In such a case, the good artists will survive, always able as ever to create art that is humane in nature.
Conclusion
We’ve covered a lot in this article, and the topic of what machine learning algorithms can do or will become is changing by the day. It’s unknown if any of this will be relevant or important in a couple of years, or even a couple of months. However, again, this is my pseudo-intellectual take on a contemporary topic that no one knows the solution to. Only time will tell what happens.
However, so as not to end on such a passive note: it's important that members of the public do their best to separate themselves from greater society if they are sincerely scared of what may happen. Invest time, energy, and money into what can keep you free from the system, and moderate the kind of content you take in so that you don't end up with some strange body dysmorphia problem relating to breast size or muscle mass.