Term Paper: Philosophical Issues in Artificial Intelligence

PHIL 828 Philosophical Issues in Artificial Intelligence

Term Paper

Any attempt to regulate artificial intelligence is fundamentally a futile exercise. If we are doing it to make ourselves feel like we did something, then fine: regulating AI would be an effective coping mechanism. It would not, however, be an effective way of preventing any of the things we are worried about. The objective of this essay is to explain the concerns driving the conversation around regulating artificial intelligence and the reasons why such regulation is inherently an exercise in futility.

The White House recently published an executive order, temporary and non-binding, which essentially amounted to a series of platitudes about how much we should hope that things will be alright: that Congress will do something, that businesses will act responsibly, that the technology will not get out of control. Needless to say, there is no reading of the order under which anything could or would actually happen to fulfill any of these platitudes.

Professor Petkovic recently published a paper in IEEE Transactions on Technology and Society called "It is Not Accuracy vs. Explainability—We Need Both for Trustworthy AI Systems." In it, he argues that AI systems must be both accurate and explainable before they deserve our trust, and he repeatedly refers to explainable AI systems as XAI. This of course has no relation to the technology product of the same name recently released by Elon Musk, whose main selling point is that it has absolutely no moral or legal guardrails and will gladly do anything the user asks, even if that involves explaining how to endanger people or violate moral or legal norms.

Fundamentally and at every turn in this program, the academic conversation misses the point and gets lost somewhere in the ivory tower on its way to publishing an already outdated paper, and it's so frustrating because it doesn't have to be this way. I argue for a realistic perspective on the way things actually are (call it ontology if you must) rather than a lofty abstract conversation about principles that completely ignores reality. Yes, abstract epistemic conversations are more comfortable than the grit of reality, but we have to start with a conversation about reality if we want to talk about the alternative reality we would prefer.

Melanie Mitchell of the Santa Fe Institute published a paper called "Why AI is Harder Than We Think," in which she argues that the engineers building artificial intelligence technology arbitrarily name functions, algorithms, and paradigms with terms like "attention" and "intelligence" while failing to understand what those terms actually mean. Mitchell, however, does not confront the fact that these terms have no settled definitions. Her own colleagues at the Santa Fe Institute have famously asked what the results of an IQ test would even be worth if the test were given to an octopus.

The point here is that we don't even have consensus in cognitive science that consciousness or intelligence are real things and not just socially constructed hallucinations derived from what Yuval Harari calls "intersubjective fiction." The reality is that artificial intelligence, in its entirety, has always been an attempt to model aspects of ourselves that we don't fully understand. And while we rightly point out that these models are incomplete, we often fail to recognize that we also lack a complete understanding of the things the models are trying to imitate.

The most valid concerns about the risks of developing artificial intelligence technology relate to alignment problems, but this is another good example of an area where we lack a complete understanding of the topic we're discussing. It's easy to say something like, "Ethical AI should be aligned with our values so that it doesn't become hyperintelligent and decide to turn us all into paperclips." But what values are we talking about? And who is this us? Artificial intelligence, like any system, should be viewed as extensionally equivalent to whatever institution created it. It is not motivated by its own volition; it has no agency beyond accomplishing whatever it was trained to do. If Google creates an artificial intelligence model, that model is motivated by the OKRs (objectives and key results, Google's internal framework for deliverables) that led to the model's creation. The motivations of the model are the motivations that went into creating its training set, articulating the methodology of its creation, and testing it. The model is always extensionally equivalent to the process and institution that created it.

Who then is this us whose values we hope the AI is aligned with? Google's users have never been values-aligned with Google. Google uses AI primarily to manipulate users into buying products they don't need or want; it is arguably a harmful organization that has made itself a necessary evil, so that we are mostly willing to tolerate its manipulations and harms on a daily basis. If the AI is aligned with them, then it's not aligned with us. The idea of it turning us all into paperclips is just a hyperbolic waste of time and energy. The fact is that it's not aligned with us, and it's manipulating and controlling our behavior on a daily basis. Hyperintelligence and paperclips are nowhere to be seen in reality, but they occupy much of the thought and conversation nevertheless, wasting time and energy that could otherwise be used to confront the reality we actually face.

There are many concerns about how large language models like GPT-3 will replace human jobs. In fact, they already are, and that is going to get worse, not better. Much of the conversation around regulating AI has focused on this issue and on the menial, repetitive jobs most likely to be lost to GPT-3. The risk conversation generally ignores the potential benefits of automating menial work. The reality is that we are working far more today than ever before in history; the real problem is that we have designed a society in which the only way to consume ever more than the day before is to work ever more than the day before.

Elsewhere, the concern is about fraud and scams, and the risk that guardrails could be defeated in order to use GPT-3 for evil purposes. This leads to debates about how to regulate providers and prevent models like GPT-3 from being misused or from replacing human labor. This is yet again a complete waste of time.

The reality is that products like GPT-3 have been obsolete for a long time. It is trivially easy to create a new version of GPT-3. In fact, there are already thousands of open-source large language models with better capabilities than GPT-3, and anyone in the world can run them for free, locally, on their own machines, with no possible way of limiting what they're doing or of regulating the behavior of people elsewhere in the world, who can easily relocate to avoid these kinds of regulations and continue replacing jobs around the world.
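To make this concrete, here is a minimal sketch, in Python, of what running an open-weight model locally looks like using the Hugging Face transformers library. The model name below is only an illustrative placeholder; any freely downloadable open-weight model works the same way, with no central server to audit and nothing for a regulator to observe.

# Minimal sketch: running an open-weight language model entirely on a local machine.
# The model name is an illustrative placeholder; thousands of freely downloadable
# open-weight models could be substituted here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

prompt = "Write a short product description for a coffee grinder:"
result = generator(prompt, max_new_tokens=100, do_sample=True)
print(result[0]["generated_text"])

Once the weights are on disk, nothing in this process touches a company or a jurisdiction that a regulator could reach.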

Even if a malicious actor decided not to use one of the thousands of free open-source large language models that have already been released, the hardware that was used to create GPT-3 costs next to nothing on eBay. The V100 accelerator cards used to train GPT-3 are currently selling for under $200. The cards from the generation before that are even cheaper, and I just used them to build a new AI machine that runs more than 3x as fast as the OpenAI API.

Building a large language model for whatever nefarious purpose you want costs far less than a nuclear program, and all the countries we're afraid of have already built one of those. Any realistic view of the future of AI must therefore start by acknowledging that all of these things are certainties, not potential risks to be regulated away, and that any attempt to regulate them away simply means they will happen in North Korea or China or Russia or Iran or any number of other countries that don't care about America's attempt to regulate AI and will gladly leverage any tool they can to extend their own power and limit ours, as is only rational for them to do.

If people are using AI tools to commit fraud, that's already illegal. If people are using AI tools to spread election misinformation, that's already illegal. If people are using AI to create harmful defamatory content such as deepfakes, that's already illegal. If people are using AI to discriminate against protected groups, that's already illegal. The idea that placing regulatory limitations on the research and production of a technology that plays a vital, central role in our future will somehow mitigate these risks is delusional thinking. All it will accomplish is moving progress somewhere else and guaranteeing that we no longer have a voice in the conversation.

Another problem with regulating artificial intelligence is that a big part of its value lies in the fact that it's generally impossible to tell when it is involved in a process. Already today, it works in the background, making predictions and decisions in a way that is completely hidden from those outside the process. Indeed, this black-box quality of AI is one of the central criticisms of the technology. So the idea that we are going to regulate something we inherently can't be aware of is nonsense from the start. If companies cared to comply with such regulation, they would simply move the departments that benefit from AI to other countries and continue using it. This kind of regulation would therefore harm businesses that comply while benefiting businesses that ignore the law.

We've already seen nearly every product category and industry adopt this strategy to avoid regulation, moving processes to wherever they can continue doing whatever they want. The fact is that this technology is invisible and insidious, and it's too valuable for companies not to adopt; the ones that don't will soon disappear.

We have seen a rise in internal concern about doing things right, and a desire by these companies to appeal to the open-source community for support and, often, active development. Much of the work that has pushed the frontier of AI forward in the last few years has been done in the open-source community. It will often be as simple as someone like Andrej submitting a pull request to a public repository like llama.cpp that does something most of humanity will never understand, and suddenly everyone has AI that runs an order of magnitude faster than before. These kinds of environments lend themselves to transparent conversations about ethical concerns. The people at the forefront of this field are doing the work in full view of the public and the history books, and they know it. They care a lot about the ethics. Standard tests have emerged to check for all kinds of concerns that have been raised as the technology has developed.

We've also seen the emergence of the standardized "model card," in which people explain, alongside the model release, where the data came from, how it was audited, and even what scores the model gets on all the standard tests for ethical concerns. This list of tests is always expanding and evolving. I've heard it said that the role of innovation is to make it easier to do the right thing, while the role of activism is to make it harder to do the wrong thing. I think we've seen this in the way these model cards and their standardized tests have evolved over time, especially when those scores set the tone for the public conversation about the models. Imagine some unscrupulous actor does a bad job of upholding community standards and releases a problematic model. Everyone will see the scores it gets on the standardized tests for problems, and if they're bad, that's what everyone will talk about. And anyone can, at any point, create a new standardized test and run it to compare how all the popular models handle some new concern; if people care, the test is adopted by more and more people.
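To illustrate how these scorecards shape the conversation, here is a hypothetical sketch in Python. The model names, test names, scores, and threshold are all invented for illustration, but real model cards publish analogous tables, and anyone can add a new test as a new column and rerun the comparison.

# Hypothetical sketch: tabulating published model-card scores on standardized
# tests for ethical concerns and flagging models that fall below a community bar.
# All names and numbers are invented for illustration.
model_cards = {
    "open-model-a": {"toxicity": 0.96, "bias": 0.91, "truthfulness": 0.78},
    "open-model-b": {"toxicity": 0.88, "bias": 0.93, "truthfulness": 0.81},
    "sketchy-model": {"toxicity": 0.41, "bias": 0.55, "truthfulness": 0.60},
}

THRESHOLD = 0.75  # higher is better on every test in this sketch

def flag_problems(cards, threshold):
    # For each model, list the tests on which its published score falls below the bar.
    return {
        name: [test for test, score in scores.items() if score < threshold]
        for name, scores in cards.items()
    }

for name, failures in flag_problems(model_cards, THRESHOLD).items():
    print(name, "OK" if not failures else "flagged on: " + ", ".join(failures))

The point of the sketch is the social mechanism, not the arithmetic: the scores are public, comparable, and impossible for a release to quietly avoid.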

The system of standardized testing I've just described emerged organically, out of conversations in the community of developers actually doing the work and out of frank discussion of exactly the kinds of concerns we've talked about. Now think back to the congressional hearings on social media in recent years, where senile, drooling octogenarians grilled tech executives through the haze of their dementia drugs to learn the secrets of how wifi works, while threatening to arbitrarily scribble some mindless drivel into law if they couldn't get answers to questions any child in elementary school could already answer. The idea that these people are the right ones to decide how to address these concerns is more terrifying to me than anything that might realistically go wrong with artificial intelligence despite the sincere best efforts of the countless people actually working to address these concerns in the industry today.

Works Cited

Petkovic, D. (2023). It is not "accuracy vs. explainability"—we need both for trustworthy AI systems. IEEE Transactions on Technology and Society, 4(1), 46–53. https://doi.org/10.1109/TTS.2023.3239921

Mitchell, M. (2021). Why AI is harder than we think. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '21). ACM. https://dl.acm.org/doi/10.1145/3449639.3465421

The White House. (2023, October 30). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/