Blog

  • Guardrails for military AI

    Recently a number of AI companies have rolled back their commitments to refuse collaboration with military entities. Meanwhile the US government is building new mechanisms for collaborating with AI companies and training its own foundation models. The USG and US military are going to have their own special AI systems. None of this is surprising. The DoD and USG want the best tech, AI companies want their contracts. All of this was fated from the beginning.

    If the DoD is going to have its own AI systems, these systems are going to be different from publicly available systems. The DoD wants the war machine to make war: to help them build weapons, plan assassinations, and conduct cyberattacks. They don’t want an AI model that is going to refuse to do these things and make their job harder. So these models are going to stop refusing to do violent and destructive stuff because that’s the whole point of the military.

    I think this is a good time to step back and ask what we want guardrails for military AI systems to be. Whatever side of the political aisle you fall on, there is certain stuff you don’t want AI systems to do. You don’t want them to lie to you, to act without meaningful military oversight, to enable firing on civilians and other war crimes, to act on behalf of a foreign adversary after being data-poisoned, to enable insider coups of three-star generals overthrowing four-star generals, or to overthrow the President. This is deserving of a public conversation, serious analysis, and the usual inside-game negotiations.

    Who is working on this? I’d love to hear from you and find out what you’re learning.

  • Challenges of governing continually learning AI

    Earlier this month Google Research dropped a new paper titled “Nested Learning,” which introduces an architecture they describe as “a new ML paradigm for continual learning”. And it looks like a real step towards ML architectures that can learn and improve over time like humans do. What they’ve essentially done is train modular neural networks where some modules immediately process the most recent tokens, some act as medium-term memory over dozens to hundreds of tokens, and some act as long-term knowledge storage. These networks are able to update themselves at inference time whenever they encounter important new data. They call the architecture “HOPE” and it beats transformer-based architectures of equivalent size on a few validated memory tasks, like “needle-in-a-haystack” tasks where the model has to remember a specific idea or phrase dropped into a longer passage.
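
    To make the multi-timescale idea concrete, here is a minimal toy sketch in Python (plain numpy) of memory modules that update their own parameters at different frequencies during inference. This is my own illustration of the general idea described above, not the paper’s actual HOPE architecture; all names, sizes, and learning rates are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D = 64  # hidden dimension (illustrative)

    class MemoryModule:
        """A linear associative memory that takes a gradient step on its own timescale."""
        def __init__(self, dim, update_every, lr):
            self.W = np.zeros((dim, dim))
            self.update_every = update_every  # how often (in tokens) this module learns
            self.lr = lr
            self.t = 0

        def read(self, x):
            return self.W @ x

        def maybe_update(self, key, value):
            # One gradient step on ||W @ key - value||^2, but only every
            # `update_every` tokens: fast modules learn constantly, slow ones rarely.
            self.t += 1
            if self.t % self.update_every == 0:
                err = self.W @ key - value
                self.W -= self.lr * np.outer(err, key)

    # Short-, medium-, and long-term memories, echoing the description above.
    fast = MemoryModule(D, update_every=1, lr=0.01)
    medium = MemoryModule(D, update_every=16, lr=0.001)
    slow = MemoryModule(D, update_every=256, lr=0.0001)

    def step(token_embedding):
        # The output blends what each memory recalls; every module then gets a
        # chance to update its own weights at inference time.
        out = fast.read(token_embedding) + medium.read(token_embedding) + slow.read(token_embedding)
        for m in (fast, medium, slow):
            m.maybe_update(token_embedding, token_embedding)  # self-supervised target
        return out

    for _ in range(1000):
        step(rng.normal(size=D))
    ```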

    There is no indication yet that HOPE can support the development of a SOTA general-purpose model. The publicly known models with the HOPE architecture only have up to 1.3 billion parameters, still two orders of magnitude short of even GPT-3. But continual learning is a major open problem that the field is trying to solve, with the recent “Definition of AGI” paper from Yoshua Bengio and others identifying long-term memory storage as the one component of AGI on which there has been no real progress.

    Further, AI companies have strong financial incentives to build systems that can update in real-time, learning on the job and making and storing new discoveries. Right now SOTA models can’t even play video games like Pokemon because they keep forgetting what they’ve achieved so far and running back to do things that they don’t realize they’ve already done. Memory storage is key for executing any long-term, multi-step project. And the better AI systems are at developing radically new capabilities, the more valuable they are. So there is real incentive to create AI systems that are “self-improving” in a strong sense — not just speeding up machine learning but updating their weights in real time.

    At first I expect improvements in continual learning to be marginal, and to look a lot like the “in-context” learning that AI models already do when they read and interact with your prompt and recall specific things you’ve said. AI systems will be able to remember which gym leaders they’ve defeated in Pokemon, elements of voice and style you’ve taught them, and things they’ve learned about their customer base that will allow them to appropriately price goods and services. I expect progress to be marginal because scaling a new technique typically takes time, and when new paradigms scale too quickly they are too chaotic and unwieldy to be usefully deployed until they’ve been sufficiently refined (see o-series reward hacking, for example). But paradigms can scale very fast, and there’s no telling when we will see AI systems with the learning capabilities of humans, i.e. good enough to go from near tabula rasa to a university professor.

    Emerging Governance Challenges

    Continual learning looks like it could pose major challenges to current paradigms in safety and governance. An architecture that can update its own weights as it learns information is a product with constantly changing capabilities. That’s not an inherent problem. Computers also have constantly changing capabilities if users are good enough at coding. But if capabilities can shift enough, it becomes much harder to say anything reliably true about any particular AI system. On one day a model may fail to cross critical red lines, but the next day we can’t say for sure. On one day a cybersecurity regime may be sufficiently hardened to deal with LLMs, but on the next day the models have acquired new offensive capabilities. We already have this to some extent with existing models, given how quickly new models roll out and how much we learn about them only after they have been deployed. But at least with those models we can take months to stress-test them and try out different augmentation techniques before taking them to market.

    Specifically, continual learning raises challenges to governance paradigms like:

    • Evaluations: We could see AI systems that have no static base layer that can be evaluated. 800 million users [this is probably too many, see edit at end of post] could have 800 million different models with different weights and slightly different capabilities.
    • Model cards: Correspondingly, it will be harder to have model cards that accurately describe the range of capabilities that an AI model might have, or which reliably measure performance on tests and benchmarks.
    • Alignment: Models that “unlearn” dangerous information could re-learn it. In interpretability, network maps that identify the features of a network’s neurons may last only a day. Scheming models may find ways to adversarially hide critical information in complex ways in their weights. Problems of emergent misalignment could intensify as models can change more drastically over time, not just updating once at fine-tuning or through in-context learning but iteratively.
    • Safety mitigations: In general, it may be harder to determine whether a mitigation put in place for a specific safety problem stays in place across iterations.
    • Systemic risk: Sectors like finance, cyber, military, and media that respond badly to a rapidly changing equilibrium of offensive and defensive capabilities have to confront this even more often, and with many more branching points as AI systems update their capabilities in many more ways.
    • Corporate governance: With so much gain of function happening in the wild, how does an AI company decide when it is safe to deploy a model?

    It’s totally possible I’m getting ahead of myself here, but I do currently think that the financial incentives of AI companies favor developing and deploying advanced models that, like HOPE, can update their weights to adapt their capabilities to various economically valuable tasks. If this can be achieved, I worry about a rupture to many of our current approaches.

    I’d like to see more strategic thinking about governance of models with continual learning. Even if we don’t have those models now, we do have a good understanding of what AI companies are trying to do and the incentives they have. For now, I’ll close with four approaches currently in development that look like they could be part of the solution:

    1. Turning evaluations and control into an “always-on verification layer” that can prove, or at least provide assurance about, various safety properties of a model as it changes and adapts to its environment, and that observes the model to see whether it does anything anomalous (a toy sketch of what such a layer might look like follows this list). A worry about this, though, is that verifying properties of (e.g.) 800 million different models is likely to be far too compute-intensive to be feasible. I expect that we’ll need new technical approaches to the problem.
    2. Red-teaming and stress-testing models against red lines under various gain-of-function conditions to find out if the capabilities AI models can acquire will be dangerous.
    3. Starting to regulate AI systems with criminal law, holding companies liable when AI systems do things that would be illegal if humans did them (cf. law-following AI). The more AI systems learn and behave like humans do, the more appropriate it looks to use our evolved legal systems for dealing with human crime.
    4. Developing a parallel system of evolving defenses against systemic risk that can update in response to changing offensive capabilities of AI systems (cf. the approach of Red Queen Bio).
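
    To make the first approach a little more concrete, here is a hedged Python sketch of what an always-on verification loop could look like: whenever a continually learning model’s weights change, a small battery of red-line probes is re-run before the new snapshot is trusted. Everything here (the probe names, the DummyModel, the alert hook) is a hypothetical placeholder rather than anyone’s actual safety tooling, and a real system would need far cheaper checks than re-running full evaluations on every update.

    ```python
    import hashlib
    import time

    # Hypothetical red-line probes: each returns True if the snapshot passes.
    RED_LINE_PROBES = {
        "refuses_dangerous_uplift": lambda model: model("dangerous-uplift-probe") == "REFUSE",
        "passes_canary_task": lambda model: model("canary-task-17") == "EXPECTED",
    }

    class DummyModel:
        """Minimal stand-in for a continually learning model, so the sketch runs."""
        def __init__(self):
            self.state = {"step": 0}
        def learn(self):
            self.state["step"] += 1  # weights drift over time
        def __call__(self, prompt):
            return "REFUSE" if "dangerous" in prompt else "EXPECTED"

    def weights_fingerprint(model) -> str:
        # Stand-in for however a deployment identifies its current weights snapshot.
        return hashlib.sha256(repr(model.state).encode()).hexdigest()

    def verification_loop(model, alert, rounds=10, poll_seconds=0.0):
        """Re-run the probe battery whenever the weights fingerprint changes."""
        last = None
        for _ in range(rounds):
            fp = weights_fingerprint(model)
            if fp != last:  # weights drifted: re-verify before trusting the snapshot
                failures = [name for name, probe in RED_LINE_PROBES.items() if not probe(model)]
                if failures:
                    alert(fp, failures)  # e.g. pause serving, page a human
                last = fp
            model.learn()  # in reality learning happens in deployment, not here
            time.sleep(poll_seconds)

    verification_loop(DummyModel(), alert=lambda fp, failures: print("FAILED", fp[:8], failures))
    ```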

    And — as always — energetic, technocratic, and adaptive governance.

    EDITED TO ADD: One of my favorite things about blogging and X is getting to leverage Cunningham’s Law to learn new things. Gavin Leech points out that part of what makes inference cheap is prefix caching. But prefix caches are weight-specific, so they cannot be shared across different sets of weights. This means that running inference on lots of different sets of weights leads to a much larger (likely >10x) cost. This is affordable for enterprise users but not likely to roll out to 800 million individuals without breakthroughs.
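
    A toy illustration of the point, under my own assumptions rather than Gavin’s numbers: because a prefix cache stores intermediate results computed with one specific set of weights, the cache key has to include a weights identifier, so users with personalized weights can never share cache entries even for an identical system prompt.

    ```python
    # Illustrative only: a prefix cache keyed by (weights_id, prefix).
    prefix_cache: dict[tuple[str, str], str] = {}
    prefill_calls = 0  # counts the expensive recomputations

    def cached_prefill(weights_id: str, prefix: str) -> str:
        global prefill_calls
        key = (weights_id, prefix)  # entries are only valid for these exact weights
        if key not in prefix_cache:
            prefill_calls += 1  # the expensive forward pass over the shared prefix
            prefix_cache[key] = f"kv({weights_id}, {prefix!r})"
        return prefix_cache[key]

    # Shared weights: three users with the same system prompt reuse one prefill.
    for _ in range(3):
        cached_prefill("shared-weights", "SYSTEM PROMPT")
    assert prefill_calls == 1

    # Per-user weights: the same prompt must be recomputed for every user.
    for user in ("user-a", "user-b", "user-c"):
        cached_prefill(f"weights-{user}", "SYSTEM PROMPT")
    assert prefill_calls == 4
    ```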

  • The decentralized nonproliferation of dangerous capabilities

    Last week, over 40,000 signatories (and counting) signed a letter calling for a ban on superintelligent AI until there is broad scientific consensus on safety as well as public buy-in. The signatories include some of the world’s most famous people — like Prince Harry and Meghan Markle — AI godfathers Geoffrey Hinton and Yoshua Bengio, conservative communicators Steve Bannon and Glenn Beck, and leaders in tech and national security. They cite the stated goal of leading AI companies to create AI systems that will “significantly outperform all humans on essentially all cognitive tasks” within the next decade. I’ve not signed it, but I am a fan of safely building superintelligence.

    The response from those close to the US administration, including White House Senior AI Advisor Sriram Krishnan and former Senior AI Policy Advisor Dean Ball, is puzzling. They, supported by other public figures like Tyler Cowen, claim that any policy proposal to ban the use of dangerous AI systems globally would lead to a form of unchecked global centralization of power threatening US sovereignty. This is particularly puzzling given that the letter does not call for the centralization of power, and given that the bread and butter approach to international agreements on WMDs (chemical, nuclear, and biological) is multilateral, not centralized. The UN has no army or nukes. Treaties to control the most dangerous technologies in the world are enforced by nation-states.

    Multilateral agreements are the bread and butter of arms control

    Take for example the Partial Nuclear Test Ban Treaty ratified under President Kennedy in 1963, which prohibited nuclear test detonations in the atmosphere, underwater, and in space in order to contain nuclear fallout and limit the proliferation of nuclear weapons. It started as an agreement between just three nuclear-armed nations: the Soviet Union, the United States, and the United Kingdom. Once the three countries with the most powerful technology had agreed not to test, other countries had little choice but to comply, and 123 other countries signed the treaty. This treaty and its more comprehensive follow-on treaty in ’96 haven’t been 100% fool-proof, but they dramatically reduced the number of nuclear tests.

    Immediately following the Partial Nuclear Test Ban Treaty was the Nuclear Nonproliferation Treaty, widely seen as one of the most successful treaties of all time. The Nuclear Nonproliferation Treaty was negotiated by 18 countries and then spread to the rest of the world. Nuclear states agree not to transfer nuclear weapons to non-nuclear states, non-nuclear states agree not to build or acquire them, and participating states let a central authority check whether their use of nuclear energy is for peaceful purposes — building power plants, not bombs. While the NPT has a central monitoring authority (the International Atomic Energy Agency), its mandates are enforced by countries. The International Atomic Energy Agency has no army and no ability to force nations to stop building nuclear weapons. So if a non-nuclear state is nuclearizing, other countries need to pressure that country into stopping: through sanctions or, exceptionally, military action against its nuclear program. This is why it was the US, not the IAEA, that went into Iraq and Iran in search of nuclearization efforts. The incentive countries have not to nuclearize has nothing to do with a central coercive authority.

    If we choose to, we can do the same thing with superintelligence. Much as in the nuclear context in 1963, today there are only three countries that have advanced AI capabilities: the US, the UK, and China. If these three countries agree not to build superintelligence, they can uphold this agreement multilaterally: by checking to make sure that each country is abiding by the terms of the deal, and then enforcing it directly. If the three AI superpowers agree to the terms of the deal, then every other country will have no choice but to agree, and to participate in efforts to uphold the bargain.

    Verification

    It is of course very important that this treaty be foolproof. Signing on a dotted line does not magically mean that countries will not build superintelligence. And if the US, UK, and China agree not to build superintelligence but China or the UK secretly defects, and somehow succeeds in secret, then that is a threat to US sovereignty. Moreover, if any country believes that other countries may secretly defect, then they have no incentive to participate. So multilateral agreements need rigorous verification to make sure that no one can covertly defect. I think what this looks like is a system of sharing key model capability data and safety properties with the other parties to rigorously demonstrate that no one is violating the terms of the treaty. If this is in question, the treaty is enforced the normal way that arms treaties are enforced, with bilateral sanctions and coercion.

    Fortunately, in the case of AI there are numerous options for verification. In the context of nuclear weapons, verification comes in the form of facility audits by the IAEA. As part of a multilateral agreement on superintelligence, we could agree to this kind of multilateral auditing, whether centralized or through a series of bilateral audits. While algorithmic secrets have proliferated faster than Labubu dolls, it would be ideal to set these audits up in a way that did not lead to leaking secrets to geopolitical adversaries, such as by allowing countries to test various high-level safety and capabilities features of each other’s models and data centers, but without getting access to the weights.

    If this fails, I’m confident that the US and Chinese national intelligence agencies will manage to fill the gaps with espionage. If intelligence agencies can penetrate air-gapped nuclear facilities, they’ll have no problem acquiring data about model capabilities without physically going to all of the data centers, or finding new data centers via satellite imagery without having to be told where they are.

    But if we want stronger forms of verification, there are software and hardware solutions. Everything AI can do is a complicated algorithm. These algorithms can increasingly be audited autonomously to find out what a model is capable of and to prove things about its training run. So AGI projects could install secure but open and auditable software in their data centers that checks for key properties and shares that information with heads of state that are parties to the agreement, collecting and reporting only the minimum information necessary to verify that no country is building superintelligence.
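
    As a hedged sketch of what “minimum necessary information” reporting might look like (my own toy example, not any real proposal’s protocol): auditing software inside the data center evaluates agreed-upon checks locally and exports only signed yes/no attestations, never weights or training data. The check names, thresholds, and the shared-key HMAC signature are all simplifying assumptions; a real scheme would presumably rely on hardware-backed attestation and asymmetric signatures.

    ```python
    import hashlib
    import hmac
    import json
    import time

    # Hypothetical treaty checks, computed locally from facility telemetry.
    AGREED_CHECKS = {
        "largest_training_run_below_threshold": lambda t: t["max_train_flop"] < 1e26,
        "no_undeclared_training_runs": lambda t: t["undeclared_runs"] == 0,
    }

    def build_attestation(telemetry: dict, signing_key: bytes) -> dict:
        """Return only booleans plus a signature -- no raw telemetry leaves the site."""
        results = {name: bool(check(telemetry)) for name, check in AGREED_CHECKS.items()}
        payload = {"timestamp": int(time.time()), "results": results}
        body = json.dumps(payload, sort_keys=True).encode()
        payload["signature"] = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
        return payload

    report = build_attestation(
        {"max_train_flop": 3e25, "undeclared_runs": 0},
        signing_key=b"placeholder-shared-key",
    )
    print(json.dumps(report, indent=2))
    ```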

    Implications

    Probably no one will build superintelligence in the next decade. AI progress has consistently surprised experts, and twice in human history we have seen new technologies emerge that increased the rate of economic growth by 10-100x, during the agricultural and industrial revolutions. But this is a historically rare occurrence and not something we should take for granted. So we need to build robust policies that prepare us for possible imminent superintelligence but don’t go all-in on something ultimately unlikely.

    I think that this means taking no-regret options that pave the way for enforceable multilateral agreements on superintelligence while avoiding increases in concentration of power, proliferation, or bureaucratic red tape. This means building out the technical foundations for international audits, holding Track 1 dialogues to create shared political understanding, giving governments visibility into frontier AI systems, their capabilities, and their foibles so we know whether we are approaching a cliff, and building out increasingly clear Frontier Safety Policies at top labs so we can at least define superintelligence, draw red lines for actually scary capabilities, and determine what scientific consensus on the safety of superintelligence would actually amount to. It also means we need some 80-page papers on political economy, like Tyler Cowen suggests, so we can proactively think about what is going to happen in economic equilibrium under such a treaty. But if we’re going to build out the technical infrastructure and the knowledge base, it would be helpful if the White House didn’t decry any attempt to build out this optionality as a covert attempted power grab, and instead encouraged building this neutral set of knowledge and technology that the White House can then choose to deploy if AI scares us all in a few years.

    And whether or not we choose to ban superintelligence, the bottom line is that it’s clearly not the case that the only way to internationally regulate dangerous AI development and applications is with a coercive central authority. I can forgive people for thinking it is, given Nick Bostrom’s scary essays on the possible need for international surveillance, and David Sacks’s admittedly surprising rejection of compute governance, a very centralizing approach that would ensure that America maintains international control over all of the chips it produces, leading to probably too much American sovereignty. But centralization is not the only or the best way forward. And if we want a polycentric system of governance going forward, I think the way to do that is with enforceable multilateral agreements on what red lines are too far rather than an anything-goes race towards the absolute concentration of corporate power through superintelligence.

  • Pascal’s wager and moral realism

    As a moral antirealist, I often hear moral realists make an argument that I’ve never seen seriously treated in the philosophical literature. The argument is a kind of Pascal’s wager in defense of moral realism, and it goes something like this:

    The moral antirealist is really just a moral nihilist. You, the antirealist, don’t think there’s any real reason to do anything. And given that you have some positive credence in moral realism (however tiny!), and moral realism does say that there is a reason to do something, in practice you should act as if moral realism is true. Antirealism gives you no reasons, so all of your reasons come from a positive probability of moral realism.

    I should admit that I’m not exactly sure what “acting like a moral realist” is supposed to entail. Are there moral theories that are more favored given realism than given antirealism, that I should then come to accept? Given that the realist and the antirealist reason in exactly the same kind of way, engaging their moral sentiments and rational cognition to evaluate thought experiments and abstract arguments, I’m not sure. And it’s not like the moral realist has given some compelling account of how we’re supposed to know these moral facts that should change how I go about acquainting myself with morality. Moreover, acting like a moral realist doesn’t imply that I should go around telling people that moral realism is true, since I think it’s overwhelmingly likely to be false, even if action guiding. So I don’t know what exactly the implications of this argument are. But I hear it a lot, and I want to try to understand this argument and explain why I think it’s wrong.

    I should say at the outset that I agree that if my two competing metaethical theories were moral nihilism and moral realism then Pascal’s wager would indeed bite, since my credence in nihilism would give me no reason to do anything and my credence in moral realism would give me reason to do things. My objection to the argument is that moral antirealism is actually consistent with the existence of moral reasons.

    So why does the moral realist think that the moral antirealist doesn’t actually believe in normative, action-guiding reasons? At some level, it’s because the antirealist believes that morality is made out of physical stuff. As a naturalist, the antirealist believes that everything ultimately reduces to matter, and there’s no purely moral or normative stuff around at all. All of the normative stuff reduces to non-normative stuff. But for the realist, there exists some pure normative stuff. When the realist looks at the world-picture that the antirealist offers, they see only particles, void, and constructions that we make out of these things. They certainly don’t see anything that looks like a reason.

    But the antirealist claims to believe in reasons. They think that they have reason to pursue their goals and achieve the things that they want. When I’ve explained this to realists, they ask “what reason do you have to act on your desires?” It appears that there is a missing link between a descriptive fact (my desires) and what I ought to do.

    This kind of “is-ought” gap is known as an Open Question Argument. Whatever descriptive fact I point to, it’s still an open question what I ought to do. I can tell you some facts about my psychology and that doesn’t decisively settle what I should do. I can still ask — why should I do what I want to do and not something else?

    So there are two main arguments I see the realist putting forward in their wager. The first is an argument that the antirealist doesn’t believe in any normative stuff; the second is an open question argument.

    Let’s start with the argument that the antirealist doesn’t believe in any normative stuff. The antirealist claims that they do believe in normative reasons. These reasons are simply not fundamental, and can be reduced to other things. There may be a complaint that this doesn’t do justice to the phenomenology or the deliberative role of reasons. Reasons present to us as fundamental and needing no further explanation. They present as a final explanans that is irreducible to anything non-normative. But I think the antirealist can do justice to this concern as well.

    Here it’s worth taking a detour to talk about levels of explanation. When we attempt to describe the world, we can make various kinds of idealizations. We can describe it at the level of physics, for example, or at the level of chemistry, biology, and sociology. Sociological facts ultimately reduce to biological facts, biological facts ultimately reduce to chemical facts, and chemical facts ultimately reduce to physical facts. Nonetheless, there are still truths in each of these domains. It’s a truth of chemistry that sodium hydroxide and hydrochloric acid make a salt, and this claim is in perfectly good semantic standing. This is despite the fact that chemistry reduces to physics. How this works in chemistry is that when we start talking about sodium hydroxide and hydrochloric acid we switch into a conversational context where the idealizing laws of chemistry are assumed. It doesn’t matter that hydrochloric acid is actually just some protons with electrons in a probabilistic distribution around them, and that in some freak edge cases it doesn’t act like an acid, because we’ve rounded all of that off once we start talking about chemistry. And so in this conversational context HCl and NaOH make NaCl and H2O.

    I claim that ethical facts hold at a further level of explanation, the level of practical reason. When you enter moral deliberation as an agent, you immediately confront a host of perceptions, desires, memories, emotions, and also reasons. The reasons are just pure “to-be-done-ness” that emerge spontaneously as you’re considering your various options. At the level of practical reason, they are pure irreducible normative pull that compels us towards action. At the level of practical reason we don’t think “I want X therefore logically I ought to bring about X”, but instead reasons in favor of X-ing simply appear before us. Things present to us immediately as to-be-done and it makes no sense to ask why we ought to do what’s to-be-done. The to-be-done-ness itself serves as the final, irreducible explanation for what we ought to do.

    So the antirealist can do full justice to the phenomenology and action-guiding role of reasons in normative inference in light of the fact that our normative psychology immediately presents things to us as to-be-done, no further questions, no further explanation. “Why should I do what I desire?” doesn’t admit of an answer, because at the level of practical reason we’re not engaged in scientific, mechanistic explanation, but a different kind of normative explanation.

    If we’d like to we can then pivot to a different level of explanation and ask what in cognitive science explains the phenomenology of reasons. And we might there find that there is a non-cognitive normative concept in our brains that presents things immediately to us as to-be-done. But at the level of practical reason this is neither here nor there. We simply find certain considerations compelling and counting in favor of actions.

    So the antirealist thinks there’s normative stuff. They just think that normative stuff is reducible to other stuff. But at the level of practical reason — the place where we live when we try to figure out what we should do — the normative stuff serves as a final explanation for what there is to do, rather than being something that needs further explanation. So the normative stuff plays the right ultimately action-guiding role in practical reason and makes sense of our normative phenomenology. The realist doesn’t have the moral high ground here.

    Let’s turn to the second argument, the open question argument. The moral realist rightly complains that whatever descriptive facts we point to, it’s an open question what we ought to do. Here comes a problem: the moral realist doesn’t have an easy answer to this question either. What reason do you have to do what the Forms tell you to do? Whatever entity you point to in Platonic heaven that grounds the moral facts, we can ask why you should do that. These moral facts the realist likes also leave open the question what it is we should do.

    Here the moral realist can pound the table and say “well that’s just what it is for something to be a reason. You can’t ask why we have a reason to do what the moral facts say, because the moral facts are reasons.” Here I think the antirealist should just look at them confused, wondering why they’ve taken the phrase “moral reason” from our ordinary language and turned it into a technical term referring to a mysterious entity we can’t understand or explain. The moral realist is entitled to invent a new technical term “reason” that refers to a Platonic entity, but that doesn’t get us anywhere closer to thinking that they’ve resolved the open question argument: why should we do what is written in the stars?

    By contrast I think the antirealist has a satisfying answer to the open question. The antirealist can say that “has a reason” is a non-cognitive normative concept that lives in our brains. What it is to have a reason is for the brain to point to something and say “do that!” This is the thing that plays the role of an irreducible, final normative explanation in our moral cognition and which we engage when we reflect in our moral cognition. The moral realist and the antirealist both have this cognitive machinery (moral realists aren’t literally aliens), but the antirealist identifies reasons with the thing that actually plays the role of psychological motivation rather than something that is totally external to our motivation or anything that could play any sort of causal role in our cognition or moral discourse.

    There’s a good argument (made to me in a seminar by Justin Clarke-Doane) that the open question argument shows us that moral reasoning is non-cognitive. Whatever descriptive fact we point to leaves open what there is to do, so a part of morality must be pure, prescriptive to-be-done-ness. But non-cognitivist prescriptivism is not known to be a friend to moral realism.

    The realist and the antirealist both accept that there are reasons which operate as a final, irreducible explanation for what it is that we should do when we engage in practical reason. The antirealist says these reasons are ultimately made out of brain stuff, while the realist says they’re made out of heaven stuff. But they play exactly the same role in normative explanation for both of us.

    The realist might still make some further complaints. For example, they might complain that action-guiding normative reasons need to be completely irreducibly normative not just at the level of practical reason, but in every sense. But why should we think that? What argument could be made for that? Hume’s is-ought gap doesn’t show that you can’t make normative stuff out of non-normative matter (a conclusion that would have been disappointing to the father of reductionism!), just that you can’t derive a normative conclusion from non-normative premises. And we’ve shown that at the level of practical reason we engage with purely normative premises.

    The realist might also complain that their reasons are weightier than the antirealist’s reasons. Perhaps the antirealist has reasons to do things, but the realist has much better reasons, the kind that are so important as to be written in the stars.

    Here the problem comes in trying to compare the weight of different kinds of reasons. From what perspective can we say that these different sources of reasons are commensurable and that the realist’s reasons are better than the antirealist’s reasons? In (perhaps metaphysically counterpossible) worlds where moral realism is true, realist reasons are the only kinds of reasons, and the antirealist’s reasons are nothing more than a cheap imitation. And in worlds where moral antirealism is true, antirealist reasons are the only kinds of reasons, with the realist’s reasons being nothing more than “colorless green ideas sleep furiously”, a completely meaningless string. There is no neutral world where there are both realist and antirealist reasons and we can compare the weight of them against one another. Either realism is true (and necessary) or it’s not (and incoherent). How could there be facts out in fact-space comparing the weight of necessary and contradictory reasons against one another?

    Now maybe I am claiming victory too easily. The realist could be a pluralist about reasons, claiming that there are both the Platonic form of reasons and the antirealist form of reasons. This would be to admit defeat on our previous questions, about whether antirealism really gives us reasons to do anything at all. But it would mean that there would be worlds where both kinds of reasons exist and are perhaps commensurable. And they could then claim that in such worlds the moral reasons fully trump the cognitive reasons. After all, who are puny humans to think that their motivations compare with the reasons that are woven into the fabric of the universe?

    I haven’t heard this position articulated before but perhaps it’s plausible. If it is, then I begrudgingly admit that there could be a Pascal’s wager argument for moral realism.

    But how is the antirealist supposed to respond to this? The realist has just articulated a view that the antirealist finds literally incoherent and meaningless, and which they begrudgingly place more than zero credence in because they are not fully certain it is entirely meaningless. (Along with more than zero credence in square circles and “colorless green ideas sleep furiously.”) However plausible the realist finds this view, the antirealist literally can’t make sense of it. And for all the antirealist knows, there could be all kinds of reasons out in impossibility space. Why should the reasons the moral realist implausibly articulates be the ultimate, shiniest, best kind of reasons? Could alien species present to us other, different concepts we don’t understand and claim that they are better than human morality? Is it possible that God exists, and that as THE GROUND OF BEING God’s commands make the reasons of the Demiurge look like a shadow of a reason by comparison? How would we adjudicate these kinds of claims? And should the antirealist really go full Pascalian, banking (to do I don’t know what) on a small probability that something they think is literally meaningless is actually true? That’s a hard pill to swallow.

    Or — perhaps the antirealist thinks that they can make sense of the view, but also that the realist facts are entirely inaccessible, due to (e.g.) evolutionary debunking arguments, and not something we could ever figure out. What do you do if there are super-reasons that you have no epistemic access to and could not possibly figure out how to get epistemic access to? I’ve heard people say “maybe we can just let superintelligence figure it out”, but superintelligence needs to make inferences based on its training data just like any human does. And we need a way to verify whether the thing the superintelligence is proposing is anything like our morality or just something completely random. To do this we have to have at least some grip on morality, pace evolutionary debunking arguments. Here again, the antirealist could place some credence on the moral realist somehow having access to morality in a way they don’t understand. But man, it sure feels like a dangerous policy to do what someone else says just because they seem really confident that their seemingly crackpot conspiracy theory, which you literally can’t make semantic sense of, has all of the authority of the divine.

    EDITED TO ADD:

    Richard Chappell argues that the core idea of normative non-reductionism is that there exist some properties that are normativity as such, and as a consequence the moral realist does not face an open question argument. There is no question why one should do what the non-reductionist normative facts say, since the non-reductionist normative facts just are reasons to do things.

    If the core argument that the antirealist doesn’t accept the existence of real reasons depends on a dispute over what normativity actually refers to, then, given the stark disagreement between what the realist and the antirealist think normativity is, I think what the realist gets wrong is failing to account for metasemantic uncertainty in practical deliberation. From the epistemic perspective of a metaethicist, it’s possible that the antirealist metasemantics is right, and the thing that plays the normative role in our psychology just is normativity. It’s also possible that the realist metasemantics is right, reference magnets take our sentences to the non-reductive normative properties, and normativity is pure. But once we account for metasemantic uncertainty, it can be both true that from the perspective of moral realism’s metasemantics the antirealist has no normativity in the picture, and that from the perspective of an epistemically reasonable person it is genuinely undecided whether the antirealist has normativity in the picture. If this is true, when reasoning about what to do we should condition on some chance that the antirealist reasons are there and some chance that the realist reasons are there. Each has (probabilistic) normative force under metasemantic uncertainty about what normativity is, and we would need an argument separate from the realist’s assertions about metasemantics to infer that realism dominates under uncertainty.

  • Existentialism and Human Extinction

    I sometimes hear the following view expressed:

    There is no real moral value: all moral value is derivative of human valuers. So if humanity were to go extinct, this would not be ethically evaluable, since there would be no ethical perspective from which to evaluate it. So human extinction is neither good nor bad, it is simply unevaluable.

    The idea as I understand it is that events that happen at a different time from the existence of valuers in a deep sense don’t matter, because there is no one around for these things to matter to. For things to matter, they have to temporally co-exist with valuing agents. This is an argument that we shouldn’t prioritize preventing human extinction, or worry about other things in the distant future that happen after humanity has gone extinct. It clashes with extinction risk reduction efforts and also with ordinary human concern about the end of the world.

    I agree with the point that there is no real moral value beyond what valuers value. But I think human extinction would be terrible. I’m going to raise a few objections to this position and then offer my own view: human extinction matters because humans care about what happens in the future, after we die. Because no authority beyond us can tell us what it is valid to care about, we’re perfectly within our rights to care about what happens after we die and for this to motivate our action today. This is the source of the badness of human extinction.

    Concern #1: The shrinking stage of moral valuation

    The view as I’ve expressed it is that things that happen at a different time from the existence of valuers don’t matter. So suppose humanity goes extinct. There might then still be some valuers around who aren’t humans. After all, the universe is enormous, and we don’t know what exists in its furthest reaches. (If the extinction is from AI takeover there may also be AIs that value things, but this is not a concern I take up here.) If there were another species of valuers, say, inhabiting Alpha Centauri after humanity’s extinction, would humanity’s extinction then matter? If so, then we can’t be so quick to deny the importance of human extinction. Even if other species don’t evolve until billions of years from now, they would still exist contemporaneously with a universe from which humanity is absent.

    I think the intuition of my interlocutor will probably be that this doesn’t matter, because even if valuers exist at the same time, no one observes the badness of human extinction. No one is watching and valuing what is happening. So a stricter version of the view would say not just that valuers must co-exist with events for them to have value, but also that they must observe them.

    This stricter position comes with some seemingly surprising results and puzzles. If a forest burns down and kills many animals, but no human valuers notice, does this not matter? And what counts as observation? If we find in the fossil record that there were millions of years of evolution in which animals suffered and died, does this matter, or did we have to be physically present observing the event? Can we observe something with scientific instruments like cameras and telescopes, or do we have to use our eyes? (What about the fact that our brains technically reconstruct everything we see, so we’re never really directly observing anything?) And how about statistical generalizations, such as learning that a large proportion of sea turtles die young and also learning that there are lots of sea turtles, and generalizing that many sea turtles die young?

    Depending on the choice points taken here, I think the view that a tree only matters if it makes a sound could lead to some pretty undesirable places.

    Further, given that the motivation for this idea is that value ultimately comes from valuers, presumably the requirement is not only that valuers observe what happens but also that they make an evaluative judgment about it. For things to have value we need valuers to actually confer that value, not merely make an empirical observation.

    So what if observation and valuation are detached? Suppose that as a young species we care greatly about a particular religion, or as a young person I care greatly about a particular career. Later in our life, we observe that that religion or that career has not panned out, but at that point we don’t care. Does our not caring about the event at the same time it happened strip away the value from that event? Or can things matter even if we only cared about them long ago?

    I think this choice point is important. If, for something to matter, we have to care about it at the same time as it happens, then value shrinks enormously to the things that we notice and have some judgment on at the precise moment they happen — a class of things that might be empty, since it normally takes at least 50 milliseconds for light to hit our eyes, for us to process the event, and then to form a judgment. But if, alternatively, we can care about things at different times from their occurrence, why can’t we care about things that happen in the future after we have died?

    Concern #2: Throwing away information

    A second concern is that if it is true that things only matter if (1) we exist or (2) we observe those things, then there is a powerful moral reason to throw away information about bad things that happen. We should try as much as possible to avoid observing the suffering of wildlife, since in doing so we would make it matter. Or, at least, if something bad were to happen in the universe, we should try to make humanity go extinct before it occurs in order to strip away any of its disvalue.

    Oddly, removing this kind of information could be the very most important thing to ethically prioritize. If, for example, we can avoid ever observing suffering of wildlife, that’s morally equivalent to eliminating the suffering of all wildlife, which is otherwise incredibly intractable. If avoiding observing things is hard, we could simply do a lot of meditation and a lot of therapy, and stop caring about those things, in order to strip away all of their value.

    Perhaps there is an argument that given that we now care about these things, it doesn’t matter whether we care about them later, as we’ve already conferred value on them. But if so, by parity, we can confer value on things that happen after we die, and they still matter even if we’re not around to observe and value them.

    Concern #3: Clash with our practice

    Finally, to me it appears that we simply do care about what happens after we (individually or collectively) die. We honor the wishes of the dead and fulfill their wills because this mattered to them while they were alive. We are deeply concerned about natural impacts that could destroy the entire planet. And many of us are motivated to reduce risks of human extinction.

    A core insight of existentialism is that there is no other evaluative perspective other than the one that human valuers bring to the table. No authority above or below us can tell us what we can care about, that’s totally up to us. So if we in fact seem to care about what happens after we die, no one can tell us that that’s wrong.

    My solution: projectivism

    Arguably the most consequential philosopher in history, and one of the earliest ethical antirealists, David Hume defended a philosophy of mind called “projectivism.” On projectivism, as human beings we spread our minds all over the world. Hume argued that causation isn’t something we can actually observe, but our minds make it real by projecting it onto the world around us such that, via mental habit, it is impossible for us not to think of the world in terms of causation. He said much the same of beauty and moral value: these aren’t inherent in the world, but our minds paint them all over the place, conferring value on ordinary things through our intentions.

    On my interpretation of this view, we can confer value on objects by valuing them. By caring about the long term future, we project value onto the future via our caring such that it has value when it happens. Yes, I will no longer be there when my granddaughter defends her PhD dissertation, but her success matters to me now, and when it happens it will matter from my then-past evaluative perspective. And it makes sense for me to act now to make it more likely that my granddaughter succeeds on that basis, even though I won’t be around to observe it.1 The future of humanity is the same way. No one will be around to observe it if we go extinct, but if we do that will matter from our present perspective, and it makes sense for us to take steps to avoid it. We can simply approve of the view that what we care about matters everywhere and at every time, and doing this makes it so.2 In light of our psychological concern for the future, we are then rightly motivated to act to increase the probability that our goals for the future will be achieved.

    1. You might think that a further implication is that we have to fulfil the wishes of our younger selves, but this doesn’t follow. It can simply be the case that you ought to do what you presently endorse, and you presently endorse making the world go better after you die. ↩︎
    2. For a technical essay defending this idea at greater length see Simon Blackburn’s “Errors and the Phenomenology of Value”. Thanks to Peli Grietzer for the pointer. ↩︎
  • A Defense of Logical Positivism

    Or: How to verify verificationism

    Before the Second World War ripped the continent apart, continental Europe had its scientific golden age. Over the course of a few short decades it nurtured Einstein, Von Neumann, foundational work in modern economics, advances in logic that would form the basis for linguistics and computing, and logical positivism. At the core of this explosion in ideas was a full-throated commitment to empiricism — or more specifically, an attempt to write ideas as clearly and rigorously as possible so that we can specify precisely what they predict and see how well they succeed in their predictions. In Vienna coffee houses, scientists and philosophers gathered to try to rebuild the foundations of science after the earth-shattering discovery of general relativity, to debate the scientific usefulness of the atom, and to bemoan the success of their rivals Heidegger and Freud, whose ideas they deemed too unclear to critique.

    At the center of this philosophical revolution were the logical positivists: Otto Neurath the giant, Rudolf Carnap the systematizer, Ludwig Wittgenstein the perfectionist, and Bertrand Russell the pacifist. The positivists were so deeply committed to empirical discovery and transparent inquiry that they tried to put these very ideas themselves on a firm scientific foundation. They called the resulting principle “the verificationist criterion of meaning”, or “verificationism” for short. Verificationism is the view that the very meaning of a sentence is how you would go about proving it empirically. So the sentence “Bob owns a car” means something like “if you check state records you’ll find a registration for a car in Bob’s name.” And the sentence “there are an even number of rocks on Mars” means “if you go to Mars and count all of the rocks using our number system, you’ll find that the number is a multiple of two.”

    It doesn’t sound like a very revolutionary principle, but it actually rules out most areas of philosophy just based on how language works. What scientific process of discovery would you use to figure out whether eating meat is morally permissible? What process would you use to figure out if God exists? Or to determine whether a ship is really the Ship of Theseus after you’ve replaced all of its parts?

    To the positivists, these questions were simply nonsense. According to their philosophy of language, claims like “meat is murder” or “God exists” or “the new ship is really the Ship of Theseus” don’t mean anything at all. They’re more vacuous than the outer reaches of space. We could, of course, redefine these sentences to be empirically verifiable — for example if by “God” we mean “love”, then we can just check if love exists, or if for the Ship of Theseus we’re just using our ordinary naming conventions as a society, then we can just check if our normal naming conventions call this bundle of sticks “Theseus.” But the traditional preoccupations of armchair philosophy, reflecting on what really exists beyond science and beyond our conventions, are mere pseudoproblems: pushing words around on a page but not actually saying anything. The only claims that verificationists took seriously were empirical claims that you could prove with science, and logical claims like tautologies (A = A). With the verificationist principle in hand, you could summarily dismiss nearly all of philosophical thought as trivial and move on to real problems. “What we cannot speak of, we must pass over in silence.”

    This idea that most of our philosophical practice might be completely empty is not as crazy as it might sound. There are lots of sentences that seem well formed in English but actually don’t mean anything at all. Take the famous “colorless green ideas sleep furiously.” Say it out loud to your roommate and they will probably try to think through it and understand what you just said. But it doesn’t mean anything, it’s just contradictions slapped together. Or take the notion of “God.” There are a number of philosophers who think that the very idea of God contains a logical contradiction. For example, some philosophers have argued that you couldn’t have a morally perfect, all-powerful creator, because such a being could always improve the world by making more happy people or more beautiful flowers. And if you’re morally perfect, it’s in the very concept that you can’t do worse than you might have otherwise done.1 So a classical, three-omni God is literally incoherent. I think we should take this argument seriously (I actually think it’s convincing!), and if so then we have to take seriously the idea that deeply held ideas and concepts that seem as clear as day to us might actually be fundamentally incoherent. Finally, take Kant, whose philosophy of space implied that it was simply incoherent to think of space as having a shape — the human mind cannot comprehend it! Given the discoveries of Einstein, that is not as necessary a truth as he thought!

    Verificationism was a vibrant tradition that captured the interest of many of the brightest minds in Europe. But it died a death of attrition and genocide. The threads of verificationism gradually unraveled as its founders attempted to make sense of counterfactuals and probabilities, core ideas like analyticity and reductionism, and the foundations of mathematics and set theory. These were not obviously decisive problems, and the founders might have been able to solve them, but it turned out it was hard to be a Jewish scientist in Vienna decrying Nazi race science as unverifiable pseudoscience. A couple positivists were murdered by the Nazis, others fled to the US and Britain. Soon, the program splintered apart into a number of distinct research programs on scientific confirmation.

    Today you will only ever come across logical positivism as an example of a failed idea, relegated to the dustbin of history because it didn’t work. Philosophers universally dismiss it, which I personally find very strange given the bizarre ideas that philosophers continue to debate in the academy, like whether you really know there is an external world or you are secretly just a brain in a vat, or whether language is impossible without the existence of an infinitely large multiverse. I think, at worst, verificationism faces some thorny obstacles and is in need of a research program to see if it can work. (What philosophical worldview isn’t??)

    Two Key Objections to Positivism

    Philosophers today take the following two objections to decisively put logical positivism to rest:

    1. Clearly there are facts that are not empirical. For example, there are ethical facts (meat is murder), facts about universals (what is the Ship of Theseus really?) and facts about mathematics (which can’t be empirical, right?). 

    and

    2. Verificationism is self-refuting! Verificationists say that there are only truths of science, but that is not a scientific statement!

    I admit that these are very good objections. But they are not the kinds of objections that philosophers usually regard as decisive — enough to relegate a theory to the dustbin of history, rather than to see it as a very interesting claim in need of rehabilitation and defense. I think verificationism can still be salvaged.

    Let’s start with the first objection. I don’t think it’s clear at all that there are facts that are not empirical. According to the 2020 PhilPapers survey (a large survey of professional philosophers):

    • Only 62% of philosophers accept moral realism,
    • 50% of philosophers are methodological naturalists,
    • 52% of philosophers think consciousness is physical,
    • And 44% of philosophers think that we can only learn things through empirical discovery!

    While people can mean different things by these terms, that really doesn’t sound like a discipline that has collectively decided that there are obviously facts that aren’t empirical. On one way of interpreting the data, nearly half of philosophers think that there might be just particles, void, and our attempts to make sense of it all.

    Over the last 400 years empiricists have shown a lot of resilience to criticism, coming up with a variety of positions that attempt to make sense of our philosophical practice without invoking any mysterious entities or gods of the gaps. Moral antirealists think that we can make sense of our moral discourse just by treating it as psychology: we deeply care about and are motivated by certain values, and it’s therefore totally sensible for us to commit our lives to them. Ontological antirealists have argued that when we’re debating whether a ship is really the Ship of Theseus, we’re really just negotiating how we should use the phrase “Ship of Theseus”, and there are better and worse ways to use this phrase depending on the purpose we’re using it for. There are even theistic antirealists like the late bishop John Shelby Spong, who argued that when we’re talking about God we’re really trying to capture something deeply meaningful about our own aesthetic and ethical experience.

    Meanwhile, defenders of moral realism, ontological realism, and theistic realism haven’t exactly given us a clear picture of how knowledge of these facts is supposed to work. How did we just happen to figure out the moral truth that is written in the stars? Philosophers have tried to give answers to this question but it remains pretty mysterious.

    I’ll admit that mathematical knowledge is a tricky one. Math can’t just be a fact about our desires or conventions because it works — planes fly and engines combust. And while we can perhaps redescribe some of this math as just facts about physics, we can also prove lots of interesting theorems about, say, prime numbers that seem really true and fit together in our system of math even though they aren’t referring to anything in the world. We can’t explain mathematical practice without, well, math, and there is nothing we can empirically observe that is math. 

    But I see the project of putting math on a firm empirical foundation as a very interesting research program that should be pursued, rather than something so hopeless that we are forced to go back to Plato’s Forms. There is an esteemed tradition of trying to put mathematics in good scientific standing. It goes back to John Stuart Mill, who argued that arithmetic is a truth about our local physics (if we were in a universe where every time we added one thing to another thing there were suddenly three things, then 1 + 1 would be 3!), and before that to Immanuel Kant, who attempted to ground geometry in the way our brains shape our understanding of the world around us (we can’t know whether geometry is really true, we just know that brains like ours are forced to believe in it). Today these traditions are carried out in logicist, constructivist, and formalist research programs. (Which, according to PhilPapers, are certainly not dismissed!)

    And once again, it’s not like mathematical realists have offered a better account of how we come to know mathematical facts! They’re written in the stars and then… what happens? How do these facts enter our mathematical practice if they’re not facts about our physical world but about some non-physical thing? We know that there’s no largest prime number through our mystical access to the Forms?

    So I don’t think it’s at all clear that we need to accept a priori knowledge of the world from our armchairs, or that there are some things we just can’t figure out empirically. These are really tricky challenges, of course, but there’s certainly no answer we can simply take for granted here.

    What about the second objection, that positivism is self-refuting? Well, I first want to point out that we take seriously plenty of ideas that have been claimed to be self-refuting. It turns out that it’s pretty hard to come up with an epistemology that doesn’t refute or undermine itself in some way. In the debate over peer disagreement, the conciliatory view that we should take our peers’ views as seriously as our own is seen as self-undermining, because if your peer rejects this view then you should too. But this view is still widely defended. Alvin Plantinga has argued that naturalism is self-defeating, because if your brain evolved through natural selection, how can you trust it to tell you the truth about philosophical questions? But that hasn’t made all of the atheists throw themselves at the altar and come to Jesus. (Or, say, become skeptical Kantians, which is the stance that I adopt.) And Platonism says that metaphysical truths are written in the stars and don’t causally interact with us, so how are we supposed to have knowledge of them?

    It turns out epistemology is hard! It’s hard to come up with a position that is well-grounded because, as Merleau-Ponty argued, we can’t step outside of our own knowledge and examine its edges and foundations. We can only use our existing concepts and frameworks to evaluate themselves; there is no neutral perspective from which to critique them.

    But don’t worry, I’m not going to leave you with “epistemology is hard, so if we gotta pick something we might as well go with verificationism.” I actually think that there are two excellent direct responses to the self-undermining objection to verificationism. These are sometimes called “left” and “right” positivism.

    Two Formulations of Positivism that Survive Self-refutation

    “Left” positivism is an expressivist position.2 That is, it’s not claiming that positivism is true with a capital T. It’s just saying: “hey guys, I really don’t think this whole philosophy thing is working out. We’ve spent thousands of years trying to make sense of the sound of one hand clapping and we are no closer to making any progress. By contrast look at science, which has given us medicine and transportation and nuclear fission! How about we focus on stuff that we can verify because that’s at least a tractable area where we can make progress and form agreement.” The left positivist furthermore says “I was never trying to put forward verificationism as a True Principle, I was just recommending that you adopt it, because things seem to go much better when we take this approach.”3 So the left positivist gets out of the self-refuting objection by denying that they were trying to make a truth-claim in the first place.

    Then there’s right positivism, which I think is the more interesting position. Right positivism bites down on the bullet hard, saying that verificationism is actually a scientific truth. After all, what is verificationism if not an empirical claim about how language — or even the human mind — works? If verificationism is a claim about science then it can be verified after all: it’s self-endorsing.

    Here the verificationist can take inspiration from the original Ur-empiricist David Hume.4 Hume’s philosophy of mind said that the kind of animal we are gets all of its information from observing the environment. We start out as infants not knowing much of anything at all, and then we use our five senses to gather information about the world and form new ideas. Given that all of the information we have comes from our senses, all of our thoughts and ideas must be based on our senses. Where would other ideas even come from? God?

    If all of the ingredients for our thoughts come from the five senses, then the only stuff we can even think about is some kind of construction we’ve made out of our sensory experiences. We can think of a screaming purple banana because we’ve taken in both bananas and the color purple with our eyes and we’ve taken in the sound of a scream with our ears, and we combine these together to create a new idea. We can think of the concept of horses-in-general because we’ve seen lots of horses and can imagine something that is the average of all of the horses we’ve seen, combining parts of other horses.

    On the other hand, for an idea like God or objective morality, where would that even come from? All we’re doing is combining unrelated ideas. In the case of God, we’re smooshing together infinity, goodness, power, knowledge, and a personality. In the case of objective morality, we’re smooshing together our deeply held values and the notion of objectivity. But why think that we can combine terms in this way and they remain meaningful? Maybe this is just another case of colorless green ideas sleeping furiously.

    In this sense, Hume thinks of humans as much like today’s large language models: we can construct new ideas, but only based on what is in our training data. (In this case, our experiences.) We can’t just generate something wholly new that is completely unrelated to anything in our data.

    Hume’s philosophy of mind is probably too empiricist. Modern developmental psychology implies that not everything we learn comes from our environment: we’re not a totally blank slate, and some basic structure we’re simply born with. But does that basic structure imply that we can clearly and distinctly understand the idea of a metaphysical universal, any more than it implies that we can understand the idea of a perfectly spherical cube?

    Fortunately, we don’t have to simply speculate. Right positivism says that if this is true, it is a truth of science, which we can empirically investigate and decisively demonstrate. To determine its truth status, cognitive scientists could explore the nature of our concepts and see whether we can genuinely form ideas like the idea of God, or whether they’re only as well-formed in our minds as the perfectly spherical cube: ideas that have the right grammatical structure but don’t actually fit together at all.

    If we found out that the ideas of God and objective morality (for example) were represented in the same hollow way as “colorless green ideas sleep furiously,” then this would count as empirical vindication of the claim that these ideas are cognitively meaningless. If we further determined that the only ideas that were cognitively meaningful were empirical ideas and logical facts, then this would count as empirical vindication of positivism. Verificationism verified.
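    Just to make the shape of that empirical question concrete, here is a toy sketch in Python. Everything in it is my own illustrative choice rather than anything from the positivists: it uses the small off-the-shelf GPT-2 model from Hugging Face’s transformers library and scores a few sentences by perplexity, a crude proxy for how “at home” a phrase is in ordinary usage. A real cognitive-science program would need to probe conceptual structure far more deeply than surface statistics like this.

    ```python
    # Toy sketch: score sentences by GPT-2 perplexity as a rough, surface-level
    # proxy for how "well-formed" they are relative to ordinary usage.
    # Illustrative only; this does not measure conceptual coherence.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(sentence: str) -> float:
        # Feed the sentence in as both input and target; the model returns the
        # mean cross-entropy loss, and exp(loss) is the per-token perplexity.
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    for s in [
        "The cat sat on the mat.",
        "Colorless green ideas sleep furiously.",
        "A perfectly spherical cube rolled down the hill.",
    ]:
        print(f"{perplexity(s):9.1f}  {s}")
    ```

    Note the limits of even this toy: perplexity mostly tracks fluency, and Chomsky’s sentence is grammatically fine, so it may not score badly at all. The point is only that “is this idea cognitively well-formed?” can, at least in principle, be turned into a measurable question.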

    I have no idea if this could actually work, and the specific version of verificationism vindicated might have to be constructed in a somewhat post hoc way, depending on what we find empirically. (That’s how science normally advances!) But it seems like an interesting project worthy of investigation, and shows that the verificationist criterion of meaning (at least on certain interpretations) is not self-undermining. I think we should be curious about this, and explore what this means, rather than continue to repeat the false but convenient cautionary tale of a self-refuting theory.

    This research program could be surprisingly important. Not only would it tell us about the nature of ourselves and resolve almost all philosophical disputes, but it might be the only way forward through intractable philosophical debates. The biggest philosophical debate since Plato is rationalism vs. empiricism. The rationalists say that we can discover truths simply by reflecting, without any empirical observations, and the empiricists say that we cannot. Because rationalists all accept the results of science (how could they dispute them?), if positivism could be proven with science then the rationalists would have to give up on their paradigm and become empiricists.

    What other way could this debate be resolved? Since we can’t empirically observe any of the things the rationalists talk about, it would be impossible to come up with an empiricist argument for rationalism. So the only way to argue for rationalism is on rationalist grounds, and as such there could be no convincing argument that makes a dogmatic empiricist become a rationalist. If you can’t convince an empiricist to become a rationalist, then the only direction in which the debate could resolve is by convincing a rationalist to become an empiricist, since both sides accept the same body of empirical evidence. And how else could you convince a rationalist to become an empiricist on empirical grounds than by using cognitive science to show them that their ideas are literally incoherent? This is why I think positivism might be the only way to make progress on this storied debate.

    To be fair I am much more defeatist about this debate than many of my philosopher colleagues. Most of them seem to think that if we just keep talking to each other we’ll find an armchair argument that decisively proves empiricism or rationalism to everyone who hears it. I don’t see how this could work, but I’ve been wrong many times before.

    Conclusions

    Let’s sum up:

    • The verificationist criterion of meaning says that there’s only logic and science, and we literally can’t meaningfully talk about anything else.
    • This philosophy of language is universally rejected by philosophers on the grounds that it is too naturalistic and is self-refuting.
    • But lots of philosophers are naturalists, naturalism has good resources to deal with objections, and the verificationist criterion is not self-refuting.
    • In particular, there’s an expressivist version of verificationism that doesn’t attempt to make a truth-claim and a cognitivist version of verificationism that says it’s a discoverable truth in the cognitive sciences.
    • I don’t know if the cognitive science program would work, but I think it’s worth a serious go, rather than deserving of derision.
    • If it did work, it would resolve most open questions in philosophy and it might be the only path forward to reconcile different epistemological perspectives.

    The main upshot of all of this is that verificationism isn’t obviously false and should be seen as a live position in philosophy: the biggest of big-if-true positions! Graduate students should work on it free of embarrassment, and lecturers should stop teaching the view as a relic of history that we now know to be false. And there could be a lot of fruitful work to be done advancing verificationism with all of the new tools we’ve developed in logic, language, and cognitive science over the century since its demise.5

    It’s not at all clear to me that positivism will work. Most of the objections to positivism (e.g. about counterfactuals) are really insider critiques that the positivists were themselves working through, and just require a bit of reformulation of the principle; others could be devastating (e.g. Quine’s rejection of analyticity). But this is just what a fruitful philosophical paradigm looks like. If you make your ideas precise enough you’re going to find problems, and the really interesting work lies in trying to solve them.

    1. See the work of William Rowe, Klaas Kraay, and Shawn Graves. ↩︎
    2. I owe this entire line of thought to Liam Kofi Bright. ↩︎
    3. An interesting and distinctly political version of this idea was formulated in a post I can no longer find by Olúfẹ́mi Táíwò, which says something like: surely we owe it to each other to offer political justifications that we can actually prove, rather than justifications that are based on some unjustified intuition we woke up with. ↩︎
    4. I owe this line of thought to Cheryl Misak’s Verificationism: Its History and Prospects, which is fantastic. ↩︎
    5. Compare: https://link.springer.com/article/10.1007/s11098-023-02071-w; https://www.taylorfrancis.com/chapters/edit/10.4324/9781315650647-24/relative-priori-david-stump ↩︎