The philosophy thread (reminder philosophy is not politics nor conspiracy theories)

Nietzsche expresses ideas that are Darwinian with regard to society and the social realm.

I guess his ideas are an antithesis to the culture created by religious authorities of previous times.

I could be wrong though, from what I remember.

Genealogy of Morals… Nietzsche…

Genealogy of how a person becomes an ass.

Believing in philosophical virtue…doesn't necessarily equate to actually having good character.

I like nihilism. I think it's the most grounded philosophy.

The only nihilists I meet are financial nihilists, which are basically a group of investors who think the markets are rigged against the little guy and use that basis to justify some investments that are questionable at best (bitcoin, more commonly alt coins, GameStop, AMC, buying and holding a Rolex, etc.). And every single one of them I've met doesn't learn from their bad investments and improve their habits; they blame the system and then double down on their bad bets.

So if real nihilists are like that with the rest of their life, then I don't see the appeal. Granted, the people who labelled this "financial nihilism" are finance people and not philosophers, so it could be a misnomer.

3 Likes

Financial nihilism? Sounds like a meme phrase where people try to jam something into a place where it doesn't fit.

Nihilism is part atheism and part realizing most things in society are a construct…

However it doesn't provide substance…
That comes from within. And when you see everything as nothing and meaningless, it's hard to find value in anything and it's very easy to become detached.

Instead of being sucked into the void where everything is nothing and meaningless, you create… a value for things and you create things of value… in order to do this you gotta change your perception.

And you wind up finding your own purpose.

Living like you're dead… leads to an unfulfilling life.
Living in blind ecstasy is just as unfulfilling.
Living according to your own purpose, as long as it's ethical… is the way.

What's my purpose… that's my own business…
As to what's your purpose, that is your own business…

So yeah…

Take control of your own life, and don't succumb to adversity; fight and overcome your obstacles, and work at honing your skills and be a master of your craft.

I deleted my post earlier; I realised it had a logic error, and realised there's an entire connection between Nihilism and a New Kind of Truth. So I've spent several hours noodling on this, resolving the idea, and connecting the relevant concepts together.

A Generalisation of Nihilism and Integration With A New Form of Truth

Nihilism is an encompassing concept when the premise is generalised. There is, in my view, a reasonable, practical generalisation of nihilism that can help us to better model the world, our values, and our thoughts; and finally, free us from vacuous chasms of non-meaning and enable a happy acceptance of delusion.

Facts

If you cannot prove anything of yourself, you cannot speak truly of yourself. If you cannot speak truly of yourself, neither can you know of anything else.

It follows, then, that you cannot know anything at all. Even the statement, “I think, therefore I am” contains the assertion “I think”, and that assertion has no proof. How can we show that this is not merely the illusion of thought? If you cannot prove that your perception is consistent, then how can you say anything at all about it?

That’s absurd, you say? Go on, try to show otherwise.

All is not lost, you think, since it then follows that anything is as meaningless as anything else, and if we think something is the case, it couldn't be meaningfully different from the case of it not being so. But can you show this follows if you cannot prove that you've even read the text properly, let alone that you think at all?

OK… this is solipsilly in the extreme. We just cannot avoid some axiomatic ground truth if even the illusion of thinking is to be possible to approach!

Axiom Number 1

“I think.”

OK. I think. Therefore, I am. That feels good. I like that. Yet, we still are unable to prove our own consistency, and so cannot show that we have computed even the simplest logical deduction without error. This is still solipsilly. Like this, nothing makes sense and nothing even can make sense in the first place.

Axiom Number 2

“My perception is capable of consistency.”
This is a tricky one. Without this axiom, we cannot justify any statement about our perception, since the consistency of the statement with itself and its axioms depends upon unproven properties of our perception, and cannot be proven without first proving our perception is consistent! A grand old Catch-22 right there. Yet, we know that our perception is a bit wonky, and must admit that it is not perfect and cannot really meet the criteria for proof, though we cannot hold that it is never consistent, otherwise we cannot justify any conception whatsoever and be confident in the result. And memory? What about unlikely flukes? What about showing that others' perceptions are consistent? What about deception? Oh, I know! Let's say we only trust the truth value of a cognitive process when it has been repeatedly tested all the way up to Six Sigma certification, for each and every thought! Well, that's not a million miles away from where I'm going, actually… maybe about a thousand miles.

OK. OK. In order to get anywhere at all that is sure and certain, we need to pretend that it is sure and certain… huh? How can this possibly work? It's going to be assumption after assumption that cannot be proven, all in an attempt to… arrive at truth?

Fictions

I surmise that this whole crusade for truth is misguided, borne of a lack of knowledge, at the dawn of philosophy, about the nature of cognition, the physics of the universe, and the implications this knowledge has for the idea of free will, all repeated for so many ages that it has become legitimised as an endeavour nobody questions, because humans are really good at hand-waving over "irrelevant" details, cognitive biases and dissonances.

There might be another way, but we must cast aside all notions of truth and it being attainable. The aim here is not to reject the utility, beauty and applications of classical ideas. They clearly are able to work for many things, for some reason… let’s see if we can uncover why.

Computation and Language

The structure of human language does not lend itself to the encoding of general predictive models, and humans cannot express detailed models of this class at all, let alone encode them in a language. Human languages are basically shared symbol maps to a common pool of experience, and one that we all slightly disagree on at that. As Wittgenstein discovered the hard way, you cannot construct a self-contained and complete axiomatic foundation that sufficiently encodes “what is the case”, and therefore statements of Philosophy will always have to use language to refer to cognitive models and human experience.

However, there is the implication here that our brains can do something special. That human brains are capable of knowing truth. Hmmm. But that seems… odd, and doesn't make sense, because the rules that govern the activation of neurons in the brain are deterministic, with some effectively random noise floor as a result of thermal emissions, or alpha and beta particles decaying. As with essentially random noise, though, it cancels out and has no effect on overall function. There is no evidence that neuron action potentials are governed by any quantum mechanical effect such that brain function exploits the inefficiency of the universe itself to do hyper-computational stuff. And even if that were to be considered, modern quantum mechanics cannot be exploited to perform hyper-computational operations; it's more like exploiting the inefficiency of the underlying implementation of the universe, in that you can entangle a bunch of particles and exploit the interference between the superposed states, allowing some problems believed intractable for classical computers to be solved efficiently. But it is still finite! All this means is that a quantum algorithm cannot be efficiently implemented on traditional computers, because, you know, the CPU cores don't get entangled into a superposition of all instruction pipelines and test every condition in one cycle.

And even if the brain were affected by non-deterministic noise, that doesn't necessarily mean it isn't approximately computable. If the non-determinism just relates to the exact timing and energy of noise interfering slightly with firing neurons, then you can still approximate that deterministically and it will behave essentially the same, with small-scale divergence due to some dependence on initial conditions. But the thing with the brain is that it grounds itself in, predicts and responds to sensory input, and sensory input is triggered by large-scale, macro-sized stuff that is far too large for there to be any realistic probability of quantum weirdness tickling your amygdala.

Then, Douglas Hofstadter, I'm sorry… I love G.E.B.… but you can go and climb into your strange loop and sulk, because there is no evidence of any vector by which a non-computable "strange loop" could have anything to do with the implementation of brain function. Same goes for you, Penrose… I will not dispute your complete and utter brilliance, and I will never approach it, but you're chasing fairies inside imaginary microtubules. There is a strange, resentful attitude to the notion that human consciousness isn't special, and I think this is because it contradicts deep spiritual beliefs that have a great effect on a lot of people's lives. Letting go of that can feel like losing your religion, as suddenly all the meaning and memory you had founded on the basis of its reality seems meaningless to you.

The conclusion I posit, then, is that the universe might well be computational. There are some good reasons to consider this. Some years back, Gerard 't Hooft released The Cellular Automaton Interpretation of Quantum Mechanics, which explores this and how such a model deals with Bell's theorem. Also doing a lot of physics in this area is Stephen Wolfram, representing it with hypergraphs where different paths through branchial space correspond to the possible paths and states of entangled particles in superposition, with many more parallels between QM and relativity discovered. According to Joscha Bach, who undertook several polls at a few different physics conventions, more and more physicists are considering digital physics models as the future of physics. It has been proven that there are infinitely many rulesets for cellular automata that are Turing complete, and every possible computation maps to a sequence of state transitions on the graph. You can also construct a mapping from any set of state transitions to another, and any possible medium one can construct representations with in a computational universe is necessarily equivalently powerful to any other medium. Yet, you can construct a representation in language that is equivalent to a brain information-wise, but does not undergo analogous state transitions.
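To make the cellular automaton point a bit more tangible, here's a toy sketch of mine (not from 't Hooft or Wolfram) of Rule 110, one of the elementary cellular automata proven to be Turing complete: the whole "universe" is a row of bits, and each generation is nothing but one deterministic state transition applied to the row.

```python
# Toy illustration: Rule 110, an elementary cellular automaton proven Turing complete.
# The whole "universe" is a row of cells; each generation is one global state transition.

RULE = 110  # the update table, encoded as a byte: bit i gives the next state for neighbourhood i

def step(cells):
    """Apply one deterministic state transition to the entire row (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge from a fixed rule.
cells = [0] * 40
cells[-1] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```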

There are also neat parallels between Gödel's Incompleteness Theorems and various notions this text touches on. The whole project of Russell and Whitehead was to eradicate self-reference from mathematics, thinking that was the vector for paradoxes to sneak in, but it turned out that self-reference can always be sneaked back in: any formal system powerful enough for classical mathematics cannot be both complete and consistent, and cannot prove its own consistency. One way to sidestep much of this, it was discovered, is constructive mathematics, where you must construct the result in order to have arrived at it, like a computer arriving at an end state, in order to know any truth within it.

Hmm. See a parallel here?

Suitable Encodings

Let's move on to the alternative to truth. We've established some compelling reasons to believe truth does not exist in the sense we'd like it to. An essential aspect of the problem is that the universe can never be big enough to fit a complete copy of all of its information inside itself, because, necessarily, a complete copy of all of its information would be precisely the same size and mass as the universe. With any level of recursion down, there is a loss of efficiency and capacity. A computer cannot emulate itself faster than, or even as fast as, itself. There is some necessary lack of information. If computation is the appropriate underlying representation for the universe, then in asking for truth we're asking for the impossible. It's a fantasy we've made up, a delusion of a poor Square in Flatland accessing a 3rd dimension. So why does there still seem to be a compulsion to hold ourselves to a standard such as truth? We shall just have to forget about it… all we can have are approximations.

Rather than look at it like some weird logic tree with exponentially branching permutations, almost as if it were some freak Tractatus that Wittgenstein might have conjured on a bad acid trip thinking he was a neuroscientist, let's look at it cognitively. You are a cognitive system. You perceive a sensory interface which gets bombarded with noisy information, and your brain learns the patterns present in it and ignores noise. According to some recent theory on general cognition in the brain, it does not operate on the basis of deductive logic. Instead, it uses, most broadly, hierarchies of classifiers, efficient compressions over invariance, generative models, and good old-fashioned gradient descent. Our brains receive information at their sensory interfaces, which is filtered through various hierarchies of classifiers, and patterns of activity propagate further into other brain regions, lighting up like a Christmas tree strobe-dancing to the universe's synaesthetic jazz. Through a deep feedback loop, invariance in sensory information guides efficient compressions, which are equivalent to predictive models with high entropy; the generated output is compared with sensory input and the error propagated back to the predictive models, discarding noise. The same feedback loop integrates our perception of internal mental representations, such as when you observe the output of predictive models and cross-check the variance between representations multi-modally across other predictive models, converging on increasingly difficult-to-falsify models as you learn.
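As a minimal sketch of that predict-compare-correct loop (everything here is invented for illustration: the hidden rule, the linear model, the learning rate; it's not a claim about actual neural circuitry), a tiny model predicts noisy observations and nudges its parameters down the prediction error with plain gradient descent.

```python
import random

# Toy sketch: "sensory input" follows a hidden regularity (y = 3x + 1) plus noise.
# The "model" predicts y from x and nudges its parameters down the prediction error.
w, b = 0.0, 0.0   # model parameters, initially ignorant
lr = 0.01         # learning rate

def sense():
    """One noisy observation from the world: a hidden invariance plus noise."""
    x = random.uniform(-1.0, 1.0)
    return x, 3.0 * x + 1.0 + random.gauss(0.0, 0.1)

for _ in range(10000):
    x, actual = sense()
    predicted = w * x + b
    error = predicted - actual    # compare the generated prediction with the input
    w -= lr * error * x           # propagate the error back to the parameters
    b -= lr * error

print(f"learned model: y ~ {w:.2f}x + {b:.2f}")   # settles near the hidden y = 3x + 1
```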

A convergence toward the maximum truth value, with a suitable encoding, then, is in its most essential reduction a decreasingly efficient compression over invariance. It doesn't matter if two representations differ in a structural way; so long as they are functionally equivalent under identical input and are equally efficient, their truth value can be considered identical, as both functions are isomorphic. A truth value corresponds to the relationship between a dataset and a compression. There are parameters that you can control that affect the trade-offs: for example, the more efficient the compression, the more pure, compact and essential the representation. Conversely, the lower the efficiency of the compression, the more detail it contains. There are no assumptions, only arbitrary parameters that we can set, and even those, by proxy of the supposition that the entire universe is computational, cannot really be called arbitrary in the traditional sense. That's philosophy, and this is computation! This is predictive models as truths by analogy, with perfect truth being complete isomorphism between two models, the closest intuitive fit to the classical notion of truth we were looking for.
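To make "functionally equivalent under identical input" concrete, here's a toy example of my own, nothing deeper: two structurally different pieces of code that always agree on their output can, on this view, be assigned the same truth value with respect to the data they predict, one just being a far more efficient compression than the other.

```python
# Two structurally different representations of the same function: the sum of 0..n-1.
def sum_iterative(n):
    """Walks the sequence and accumulates each term."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_closed_form(n):
    """A far more 'compressed' representation of the same invariance."""
    return n * (n - 1) // 2

# Functionally equivalent under identical input, so (on the view above) their
# truth value with respect to the data they predict is identical.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(1000))
print("structurally different, functionally isomorphic")
```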

Back around to Nihilism before we end… the ultimate generalisation of Nihilism applies to intrinsic meaning: the intrinsic meaning of the world is, literally, the computational representation that is equivalent to it. It is worth noting that any predictive model in such a universe is also a computational model, and there will theoretically be an infinite number of equivalent representations, some more efficient than others. We just don't have access to the information to infer perfect analogues, and not every subset of the change in information in the universe can necessarily be implemented in an analogue, as the information may leak into enough of the universe at some point (if there is a big crunch, for example) that you will be beyond the capacity to manipulate the universe into analogues, even if we could access the necessary information from the substrate. Like Minecraft characters trying to read data about the subpixels that comprise their digital bodies, we can't get that sort of access to the lower-level computational substrate. All we can do is produce predictive models… or suitable encodings… suitable to a degree at which the error is below the noise floor necessary to satisfy some objective function.

1 Like

Philosophy of the game…

The playing field that is reality

Plus the how to of how others hustle and grind.

Social game is playing with words and ideas to trigger certain things in conjunction with reading the room.

But is life a game?

Although playing is important in youth…and as an adult the pretend becomes real.

Life is what you make it.

Recently, I had this idea – a generalisation of Nihilism, and its implication of and consequent dissolution in an entirely new conception of truth and meaning… I've been in a state of complete immersion, resolving this idea and following the intuitions through. Essentially, the central problems of philosophy become hallucinations or conflations that emerge as a byproduct of a set of intuitive assumptions about the structure of reality that our brain evolved a bias towards because it was "good enough" in the balance of local optima. We even surpassed that bias as our conceptual understanding arrived at critical points that led to fundamental shift after fundamental shift in our basic assumptions of this structure, but we have a tendency to encode new conceptions with the basic conceptual building blocks out of which prior models were constructed, and therein is the vector for the compound propagation of representational error across individual, generational and civilisational memory. All of our hallucinations were "good enough" to be predictive of the structure of the sensory stimuli that are directly relevant to our survival, so there is no evolutionary selection pressure whatsoever towards genetic mutations that lend themselves to the development of radically alternate models of the structure of reality, models we didn't even have the conceptual tools to conceive of until about 60 years ago, when we began to understand more fundamentally what neural networks are and how they might be used to encode predictive models of the structure of a dataset (be that dataset a fixed set or a stream of sensory information, it will always have some upper bound).

The generalisation then follows that the very notion of truth as something that can be reached is a hallucination born of a bias towards it, a byproduct of a "good enough" evolutionary artefact, where our reproductive success has never depended upon evaluating our models' prediction of perception at a resolution sufficient to identify incongruence. Hence, we are so mentally contaminated with the notion of truth, and of deductive reasoning as being capable of encoding truths about the world (or anything at all, if you consider that we cannot prove that our axiomatic foundation is representative of "what is the case", nor can we prove that our cognition is consistent and therefore assess the validity of a statement without the possibility of error), that we struggle to even fathom a thought process that does not employ deductive reasoning as a basis for encoding knowledge about the world and ourselves in analytical matters.

The supposition I put forth is that predictive models are the only means of relating information together, and that convergence upon complete isomorphism between a predictive model and the corresponding sensory stimuli sits, in the limit, precisely on the event horizon where we previously assumed truth existed. However, I think now that the notion that truth exists at all in the limit is a result of our mental contamination with, and near-dependence upon, these assumptions causing us to sneak them back in through the back door and confuse ourselves. In this sense, a predictive model that is isomorphic to a generative model is necessarily equivalent to it, and encodes the same information. With this model, the entire problem of Nihilism becomes a byproduct of a dichotomy between truth and the world, and it turns out that Nihilism cannot even be encoded, for the same reasons that philosophy cannot be completely encoded in language, in the proposed alternate conception whereby isomorphisms between what are fundamentally constructive, computational systems are what constitutes a "truth value" between a hypothesis and an observation, if we hand-wave over the relation of the traditional language to this.

Even our cogitation of axiomatic logic does not appear to actually be a clean and concrete mechanical analogue of a logical deduction, and instead it emerges out of dynamically constituted networks of models, and appears only as an impressionistic apparition in the aggregate… a kind of blurry and unstable form in a swarm of birds, the movements of which emerge as a product of their own spatial and behavioural relation to one another… that seems more like pareidolia than any concrete analogue to deductive logic. Even if that logic is expressed as a more concrete set of interactions between distinct components, like we can build into computers, there is no inherent relation, beyond the functional equivalence of the models in the sense described in this text, of that deductive process to anything in the world for which we consider the deduction to be representative, except that which we subjectively choose to interpret it as being representative of; and even still, the underlying composition of a computer is analogous to the impressions of forms in flocks of birds, just at different time scales and varying short-term instability.

Truth in an ideal sense here then becomes (by means of internal predictive models, not actually founded on logical deduction in any consistent and analogous sense) the similarity between two systems that are constructive in nature. The exact representation is irrelevant because, for example, Turing completeness has been proven (ironically, by the very deductive modelling process that we've established cannot mean anything, without pretending you can actually make sense of anything at all, if we hold our conceptions to the standard the delusion of truth asks of us, and which instead emerges out of underlying predictive models) in an infinite number of possible instruction sets and encodings (a Turing machine, the lambda calculus, cellular automata with particular rulesets that govern transformations of the graph, and an infinite variety of potential constructible representations that are equivalently powerful). So two constructive models can be equivalent and have complete isomorphism to one another, encoded in different representations, by way of a translational map that encodes the necessary state transitions to convert from one representation into another in a reversible way.
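A tiny, made-up illustration of that translational map idea: the same state can live in two different representations, a list of bits or a single integer, with an encode/decode pair that converts between them reversibly, so a transition defined in one representation has an exact counterpart in the other.

```python
# Hypothetical example: one state, two representations, and a reversible map between them.
def encode(bits):                 # list-of-bits representation -> integer representation
    return sum(bit << i for i, bit in enumerate(bits))

def decode(number, width):        # integer representation -> list-of-bits representation
    return [(number >> i) & 1 for i in range(width)]

state = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(state), len(state)) == state   # the map is reversible

# A state transition defined on one representation induces the same transition on the other.
def shift_bits(bits):             # transition in the list world: rotate one step
    return bits[1:] + bits[:1]

def shift_int(number, width):     # the "same" transition, translated to the integer world
    return encode(shift_bits(decode(number, width)))

assert decode(shift_int(encode(state), len(state)), len(state)) == shift_bits(state)
print("two representations, one (isomorphic) system")
```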

However, herein we arrive at the same instance of an inevitable Nihilism if we take this perspective globally. The solution is to consider the relativity of Nihilism depending on your frame of reference. Truth is then something which can only exist from the perspective of the predictive model and what it predicts. For a truth value between a cognitive model and sensory stimuli is a function of the level of resolution required for any evaluation of incongruence between the model and the sensory stimuli to determine error – the truth value converges to one with the resolution of the evaluation of the incongruence in the limit. Most generally, it is the relationship between variables in a system of compression over invariance in the change in information that sets the limit of the entropy of the compression in a given construction. Converging on a compression of all invariance in the change in information is approaching isomorphism to the dataset, and the entropy of the model relates to its efficiency. The brain values lower entropy because it can afford the energy cost as a trade-off for reduced computational complexity, in a nice evolutionary optimum. And for us, a "truth" in the limit can only be where the error is zero and the model is predictive of the information at a resolution in the limit. A model is a "suitable encoding", and thereby a "truth", where it predicts the structure of information at a resolution sufficient that no incongruence can be perceived between the prediction and the information. Our cognitive worldviews, then, are difficult to fathom analytically and emerge out of complex, dynamically arranged hierarchies of predictive models that generalise over abstractions in the aggregate.

Many of the fundamental problems of philosophy dissolve entirely when this view is taken, including Nihilism, which past philosophers tried so hard to escape, seeing it as a defeat of sorts, without realising that in the representation they assumed to be necessary for encoding truth, you cannot even arrive at the conclusion that it is a defeat without being in contradiction with the central hypothesis of Nihilism.

It is through relating the modern insight into possible implementations of general cognition, and theory on human cognition, and its relevance to radically different representational models for the construction of knowledge about the world, and the corresponding evolutionary optima that biased us towards the hallucination of a false dichotomy, to the context of an abstract and complex society, that this all comes together. In that context, similar errors of unnoticed incongruence between our working perceptual models and our sensory stimuli exist (sensory stimuli inform the brain's automated process of finding efficient compressions over invariance in the change in sensory information, which constitute equivalence with a predictive model), because for most purposes we are able to construct a "good enough" but ultimately incongruent model, and within the limits of our ability to consciously evaluate this very incongruence in real time, we cannot falsify such models because we can't even see there is a problem at all… the errors thence compound and propagate into derivations of new models, and before long we're in a quagmire of abstractions and delusions of consistency that go unnoticed because they closely enough predict what we are able to pay attention to that they rarely appear to be a problem at all.

This, then, is also my theory of what is at the crux of the modern problem and the general confused daze that people go about their lives in, completely oblivious to it, except for those who develop an obsessive relationship to certain concepts that predispose them to the development of models, and the faculty required to even measure this incongruence. The scientific method, and the idea of empirical measurement and construction of a theory, is our "good enough" hallucination that fudges the details but is sufficiently isomorphic in its tendencies, when followed, to the basic nature of the world that we've gotten close enough to the maximum possible resolution of detail and struggle to go any further.

More and more parallels between concepts such as entropy and compression are being found with the emergent dynamics of physical systems, such as the conservation of energy, the informational equivalence of different constructions, and the emergence of the phenomenon of time itself. This appears to be moving in a direction that implies ever further that the underlying structure of reality might well be possible to encode in a computational language, and if that's the case, it is possible to model it using an infinite variety of representations, from cellular automata to programs running on a Turing machine with a simple read/write tape head and basic instructions to conditionally index to other points on the tape depending on the value at the present index. The great realisation is becoming that the equivalence of representations means that it doesn't matter what the representation is so long as they are isomorphic to one another, meaning you can always define a mapping function to transform any possible state of the universe into any other state. What matters is the semantics of the model itself; its representation is incidental. This notion is being more seriously explored by respected scientists, and there are attempts at cellular-automaton-based interpretations of quantum mechanics and relativity theory.
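For anyone who hasn't met one, here's a minimal sketch of the kind of machine I mean: a tape, a head, and a table that says, for the current state and the symbol under the head, what to write, which way to move, and which state to enter next. The particular transition table (a unary "add one") is just a made-up example.

```python
# Minimal Turing machine: a tape, a head, and a transition table.
# This example table appends a 1 to a block of 1s (a unary "add one"), purely illustrative.
def run(tape, table, state="start", head=0):
    tape = dict(enumerate(tape))                      # sparse tape, blank = 0
    while state != "halt":
        symbol = tape.get(head, 0)
        write, move, state = table[(state, symbol)]   # conditional lookup on (state, symbol)
        tape[head] = write
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

table = {
    ("start", 1): (1, "R", "start"),   # skip over the existing 1s
    ("start", 0): (1, "R", "halt"),    # write one more 1, then stop
}
print(run([1, 1, 1], table))   # -> [1, 1, 1, 1]
```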

These notions have profound implications for the philosophical problems that the intelligentsia have obsessed over for thousands of years, and in one swift revolution they essentially dissolve and become either apparitions of a false dichotomy or suitable encodings for which there are infinitely many equivalent constructive systems.

1 Like

The above post is very dense and difficult to understand, as my personal notes and workings-out of an idea often are.

I wrote an answer on Quora that explains most of it in a more digestible way…

There’s an interesting notion which fascinates me, that has profound implications for a surprising range of ideas that humans care about. I am referring to computation, and the notion that computation might be what governs the emergence of the universe. That is to say, it may well be completely deterministic, through and through. While quantum theory is a deterministic theory, the uncertainty is in the measurements, and without being able to account for it, whatever its source, the outcome is not deterministic.

The idea of computation being the proper encoding of the universe is being seriously explored by respected and established physicists, like Gerard 't Hooft, but it is still a bit of a fringe idea, although there are more mainstream variations of quantum theory that more closely align with it.

Firstly, we’ll look at arguments against it, for it, and by proxy. Then we’ll look at the implications of a computable universe, if this works out.

Arguments against it

The most convincing argument against it is Bell's Theorem, which shows that quantum theory is incompatible with the notion that there are local hidden variables, and it is those hidden variables which, if we knew about them, would make the outcome deterministic. It should also be considered that it may simply not be possible to describe the universe in any kind of language, and that it is thereby special in some way that cannot be modelled. Nobody knows.

Arguments for it

Non-local hidden variables have not been entirely ruled out; in these, there is an instantaneous exchange of information between all particles, thereby enabling the outcome to be deterministic if they were known. If the underlying medium were computation, then that implies there is a global context in which the entire state of the universe is transitioned from one state to the next, and it is only from the perspective of an observer inside that the transfer of information is limited to the speed of light. This would allow for the apparently instantaneous exchange of information between all particles, without it being usable for any sort of communication from the inside.

Arguments by proxy

Infinities are troublesome all around, and physicists often agree that their emergence is evidence that something is wrong with a theory, such as where current theory breaks down when modelling the conditions in black holes and you get singularities. Continuities and infinities cannot be constructed, and various paradoxes and contradictions emerge when you allow for them, yet relativity represents spacetime as a continuum. Unless the reconciliation of relativity and quantum theory is able to quantise gravity, and eliminate the presence of all continuities and consequent infinities, the mathematics will be subject to these paradoxes and contradictions.

Constructive mathematics is not subject to any of these troubles, and neither, by extension, is computation. Here, you must construct a result in order to say anything, and you cannot construct continuities or infinities, nor can you do anything random. If constructive mathematics is the only way to be completely free of all paradoxes and contradictions, while still being sufficient to model anything of comparable complexity to the universe, this creates an additional complication: if the universe is governed by a system described by mathematics in which paradoxes and contradictions can be found, why is this so? By Occam's razor, the simpler solution is that it is constructible, as you no longer have additional complexities that cannot be answered.

Implications

Under computation, at the lowest level, all that exists are state transitions. These transitions occur with respect to a function that maps one state to the next. If it is the case, there are many implications, and it renders several philosophical quandaries moot.

  • It rules out free will entirely, with no potential vectors for it to emerge.
  • Consciousness isn’t special, and must be a constructible process.
  • Language is sufficient to describe the world, just not in any way we previously thought, and instead it is described as a computational system.
  • Dualism is essentially ruled out, or at the very least constrained in its scope, because there is no meaningful difference between what is physical and what is mental. All mental states map to a physical state, and there is no theoretical limit to the number of possible representations that map to a physical state – the representations are all equivalent

This next one is tangential, and is the most complicated. While you can argue some of this without it, it becomes necessary with computation. It leaves no possibility for anything special that could transcend the limits of computation, and by extension the limits of language.

Truth and logic are a delusion, and their limits irrelevant. You cannot define truth, as no definition of truth can meet the criteria for it, and you can’t define the criteria without having first defined truth – you end up trying to resolve a circular definition. Yet, we find we’re doing logical deduction to arrive at truth when we make a statement about anything, even if somewhat informally, and therefore the limits of knowledge are also the limits of truth. If you’re meticulous, you find that you cannot prove any statement at all. Take the most essential statement – I think, therefore I am – it cannot be proven. You cannot show your deduction is valid without taking it as axiomatic, and even then you cannot show you have evaluated this without error.

How on earth can we say anything at all? I think we’ve got it all backwards. Why do we have to start with truth and logic? If you don’t assume anything, you experience consciousness, and notions occur to you as a side effect. Why is it not sufficient that a notion has occurred to you? If the notion occurs that you are using cognitive models to arrive at knowledge, truth doesn’t need to exist at all. Yet, notions of logic and truth still occur to us, and we find them useful. We’ll see why that is as we continue to explore this.

Theories suggest that the brain uses predictive and generative models to do cognition. These models derive from the measurement of invariance in the change in sensory information. This also includes information about conscious experience, like imagination, thus enabling us to model things in the world and in our minds. Cognitive models are essentially lossy compressions, and the decompression into an approximation of the original input is like a prediction. How, then, do these notions of logic and truth occur to us, and why are they useful despite their limits?
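A rough toy illustration of "lossy compression as prediction" (my own example, not a claim about how brains actually store anything): compress a signal down to a few block averages, then "predict" the original by decompressing, and check how much detail survives.

```python
# Toy "cognitive model": a lossy compression of a signal into block averages.
# Decompressing the averages back out is a crude prediction of the original input.
signal = [0, 1, 2, 3, 10, 11, 12, 13, 2, 2, 3, 3]

def compress(xs, block=4):
    return [sum(xs[i:i + block]) / block for i in range(0, len(xs), block)]

def decompress(model, block=4):
    return [value for value in model for _ in range(block)]

model = compress(signal)                 # far fewer numbers than the original
prediction = decompress(model)
error = sum(abs(p - s) for p, s in zip(prediction, signal)) / len(signal)

print("model:", model)                   # [1.5, 11.5, 2.5]
print("mean prediction error:", error)   # small where the signal was regular
```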

We appear to model primitive notions of logic and truth in tandem with cause and effect, and we arrive at general models of necessity and contingency by identifying invariances between them all – their semantics map onto each other. These models aren't ideal encodings, they are more like patterns that emerge in flocks of birds, and relating between models is akin to spotting similarities between them. It is similar with computers. They aren't doing logic, *really*. In essence, we arrange lots of tiny pieces that jiggle about in a coordinated dance, and we observe that our model of logic is invariant with this dance. It appears very consistent, but we cannot show it to be error-free.

We have a bias to fundamentally assume that when we say or think about anything rationally, we are necessarily bound by logic and statements of truth, and to varying degrees, mistakenly determine that the limits of truth are therefore the limits of knowledge. There’s a good reason for this. Converging on primitive notions of logic and truth and getting pretty good at it helps us make better tools, predict threats, and escape to safety – all of which increases our reproductive success. It has also become imbued into our culture and philosophy, which further reinforces the bias.

Finally, nihilism mostly disappears because it is justified on notions of truth, and that we can’t show the deduction of nihilism to be sound. If we disregard the necessity for truth and logic to say anything, and refer to cognitive models instead, even if those models employ something that looks similar, nihilism is banished entirely. Then it follows, if truth is between a cognitive model and sensory information, and is therefore modelled as being a value that converges the more we model it, then meaning is the invariance, and shared meaning is shared invariance. The more true our models, the more true the meaning, and the more true the shared meaning. This is OK because we’ve stopped using traditional logic and truth as a metric of the validity of cognition.

This is all hard to intuit, but I think this is very profound.

1 Like


Do you believe in this?!

:smiley: he didn't say, but I like it

Also the guy on the left in the screen you shared looks like an old neighbour of mine.

1 Like

I've got a criticism for you.

There's nothing wrong with your ideas…

However, learn to say more with less…

Because in this day and age of 20-second TikToks… rotting people's brains… you're dealing with 5-second attention spans…

Also keep in mind that not everyone is as intelligent as you are… learning the language of the simple folk is a valuable skill that will help you in the future. But also don't lose your own voice… have a balance of both and keep being able to speak in both ways.

:slightly_smiling_face::+1:

1 Like

I get what you’re saying. I’ve considered this in the past, and others have suggested it, and I’ve worked through the emotional and psychological process of its implications, and come back out of the other side of it.

I don’t particularly care if anybody reads it or not. I don’t write stuff primarily for other people, I write stuff primarily for my own enjoyment and my own process of working out what I think about the world. I have friends in the real, and more recently some folk online I’ve met through Discord, that I chat to and debate with. Debate for me is not about winning, but the cooperative exploration of ideas through reciprocal challenge.

I don’t particularly depend on it emotionally, as in my ego and self-esteem are not founded on it, and I’m quite neutral in general about a lot of things that others might get into a tizwoz over. I used to depend on it, though, and my ego was a big thing in all this, but what remains today is more internalised.

Nonetheless, I do share my ideas, for two reasons. It is nice when people engage with it and chat with me, and sometimes people do. I also like to educate, insofar as I believe I have something of value to offer, and I think some of the things I talk about (most of it isn't here on this forum) are important.

When there's a practical need to explain an idea in a clear and digestible way, I will do so. That was a common requirement in my line of work. But, to a certain extent, some technical language cannot always be avoided, because there just isn't an efficient way to communicate the concepts succinctly whilst also relating them to each other, and to do so in a concrete way that isn't ambiguous or open to interpretation.

I think there’s a certain mutual compromise between simplifying the language and readers learning the terminology. I’ve read many books for which I’ve had to do my part in learning the language. I’m OK with that. It’s part of the compromise. In a sense, that balance is due to a practical limitation in language when it comes to very abstract things beyond the purview of typical human experience.

An example of such a compromise, for me, is that most recent post, which explains the one before it in a more digestible way. I can't help but use the terminology of quantum theory when referring to parts of it, but at other points I use other terminology like "deterministic" and "cognitive model", as those terms can't be efficiently communicated using less specific ones, and there isn't really a single word or short phrase that I can neatly integrate into a sentence that also allows me to relate the concepts to other similarly abstract topics, without it becoming more confusing rather than less.

Admittedly, I do use some terms just because I like them. For example, "vectors". I didn't need it to efficiently explain the concept, but I just like the word. A certain amount of creative use of language is a thing. I just like how it sounds. :smiley:

1 Like

I’m a long-winded person as anyone who has spent more than a few minutes with me knows. I’ll never forget this ridiculous moment at Burning Man last year in the middle of the night while I was on one of my rants in my friend’s trailer… he was between hits of nitrous and in the middle of one of my sentences he sang out in the style of John Mayer’s “Gravity”:

“BREVITY!!”

I’ll never forget that shit. Lives in my brain as a squatter forever, doesn’t even pay rent. This was my dipshit friend’s way of telling me “too many words bro”.

Now, in contrast to that, I’m going to read through some of the tldr here because ya know what @bfk, we all need to read more.

2 Likes

I asked chatGPT to summarize @psyber 's posts:

The author argues that many philosophical issues, including the idea of truth and the problem of Nihilism, arise from how our brains have evolved to understand the world in a “good enough” way for survival. These understandings are not necessarily accurate but are useful for getting by. As a result, we inherit and build on these imperfect models, which leads to ongoing errors in how we perceive and explain reality.

Instead of relying on logical reasoning to find truth, the author suggests that predictive models—ways our brains guess what will happen based on past experiences—are more accurate in relating information. They propose that truth is not an absolute concept but rather a measure of how well these predictive models match what actually happens.

The author also believes that our deep-rooted beliefs in certain concepts, like deductive reasoning and objective truth, are ingrained errors. They see modern advancements in understanding cognition and computation as tools to expose these errors, suggesting that many longstanding philosophical problems can be resolved or dismissed when viewed through this new lens. Essentially, our perceptions and beliefs are shaped by practical needs rather than a quest for absolute truth, leading to widespread confusion in society about what is real or true.

Might be an oversimplification.

1 Like

Haha, it makes a few errors, some quite glaring actually, but it’s not too far off of a decent summary… though it doesn’t really explain why any of it might be the case.

But yeah it is an oversimplification IMO.

Another reason I didn’t mention is that I also think justifying what’s being said is important. It’s more efficient to address the potential arguments against an idea that you’ve already considered, than to repeatedly engage in a back-and-forth with somebody going over the same arguments against it. The way I address potential arguments against it is to expound the idea by methodically building up from a foundation, and then reaching a conclusion.

1 Like

Haha, I know the one. Time and a place, I’ve come to learn. If I’m at a rave, I’m rarely interested myself in trying to explain a complicated idea. I’m just gurning my nut off, staring into fractals on the floor while getting down to some drum and bass or techno.

2 Likes
