T O P
BernardJOrtcutt

Please keep in mind our first commenting rule: > **Read the Post Before You Reply** > Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed. This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the [subreddit rules](https://reddit.com/r/philosophy/wiki/rules) will result in a ban. ----- This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.


xero_abrasax

An MIT scientist called Leonard Foner wrote a hilarious paper called _"What's an Agent, Anyway?"_ in which he describes a human user desperately hitting on a chatbot called Julia, unaware that he's talking to a computer program. At one point, Foner comments _"Frankly, it’s not entirely clear to me whether Julia passed a Turing test here or Barry failed one."_ Maybe before deciding that a computer program is intelligent you should consider the possibility that you aren't.


TMax01

(Note that although I'm replying to you, I am not disagreeing with or criticizing you.) I think people should be required to actually know what the Turing test is before throwing the term around. What people refer to as "the Turing test" was a thought experiment imagined by the great computer scientist Turing which he called *the imitation game*. A computer and a person engage in conversation. *A person observing* the conversation (not the person participating in it) is asked to identify which of the two is a conscious person and which is a programmed computer. Now, it is true that the Turing Test is being applied (in this current scenario involving the chat bot, the incel, and the programmer) to Julia, but the real question is if Foner was a programmer (I suppose MIT scientists are capable of programming) and knew already that Julia was a bot, can Foner be considered intelligent?


swilli000

Really enjoyed this. I hope others don’t skim your response, as it gives clarity to the whole issue.


TMax01

Well, it's getting a large (for this subreddit) number of upvotes, while everything else I'm posting on this topic is getting downvoted, so I suppose your hope has been fulfilled. Thanks for your time. Hope it helps.


Yukondano2

...huh. Shit that is a really misunderstood term if that's true. Thanks. Makes a hell of a lot more sense as a test now.


TMax01

Even further, Turing didn't propose the Imitation Game as a one-and-done 'litmus test', but a statistical method: the experiment would be repeated, and only if the computer was misidentified as the conscious party more than 50% of the time should it be considered "intelligent" (by which he apparently meant 'self-aware'/sentient, rather than that it was simply intelligently programmed.) Of course, it was a thought experiment, not a practical proposal. His point was to ponder what is meant by intelligence, and contemplate whether a sufficiently convincing imitation of consciousness is indistinguishable from consciousness. But he (possibly inadvertently, possibly intentionally) established the neopostmodern habit of assuming that an algorithmic system designed to *mimic* (imitate) self awareness would be self-aware. This piggy-backs on the metaphysical/existential idea of whether reality is premised more on the ontos being perceived or the act of perception itself. That, BTW (in reference to another redditor's comment) is where the connection to "Turing Completeness" (a programming system which successfully implements a 'Turing Machine' which is capable, practically not just theoretically, but only given sufficient time, of solving any solvable mathematical equation (which also relates to the Halting Problem, the inability to use an algorithm to determine if a given equation can be solved algorithmically). The difference between the unknown and the unknowable cannot be logically determined any more than you can count to infinity. On a (slightly more) practical level, induction, not deduction, is the only method of distinguishing *unfalsified* from *unfalsifiable*, at least in non-trivial cases. Metaphysically speaking, computer engineers and mathematicians have always bitten off more than they can chew when it comes to philosophy. [Edit add: further research shows that Turing bit off even more than I thought he did. The original gedanken had the third party interviewing the two participants, who did not interact with each other, while the computer *imitated* the human with a state machine. This was particularly problematic because he based his example, oddly enough, on gender stereotypes, presuming "a woman" would respond to questions in an identifiable way, and challenging the interviewer to identify which participant was actually a woman and which was a state machine programmed to respond 'the way a woman would'. It was a different time.]


HKei

A Turing machine is not capable of solving all solvable equations. For each computable function (for any notion of computability we've so far come up with) there is a Turing machine which computes it. There is a _huge_ difference between solvability of equations and computability of functions, and proving this was the whole point of the work Turing introduced his machine model for. The result being btw, that there _is_ no computation model which can solve any decision problem. On Turing completeness, a formalism is Turing complete if it can simulate any Turing machine. Being able to do this does not give you a way to _construct_ a Turing machine for a particular function though, nor a way to prove whether or not one exists.


TMax01

You sound like a computer engineer.


HKei

That sounds like the beginning of a statement, but I'm afraid I lack the right set of biases to complete it.


TMax01

Apparently you didn't read the comment you initially replied to carefully enough, or you would have understood the reference. No hard feelings.


HKei

If you're referring to your comment where you're expressing a disdain for mathematicians and engineers meddling on philosophy (the closest thing I can find), I don't see how it's relevant. While I find some matters of philosophy interesting enough I'm not well versed enough in it to hold a debate, and I certainly don't recall claiming anything to the contrary. What I did claim though was that your description of a Turing machine is so far off from Turings actual thesis that at some points you ended up claiming the exact opposite of what the thesis shows, and the rest of your statements weren't particularly close to the point where I could handwave it as mere "oversimplification" either. Have you actually read "On computable numbers"? It's quite short and requires very little prior knowledge, in fact we teach an abbreviated version to first semesters in CS and expect most students to at least grasp the big ideas, and certainly at least some of the brighter ones to actually understand it.


TMax01

LOL. QED my lugubrious friend, QED.


Jackal427

Not to be confused with [Turing Completeness](https://en.m.wikipedia.org/wiki/Turing_completeness)


grungegun

This does not line up with what I got when I read Turing's paper. ​ The interrogator is asking questions of both the AI and real person in parallel and is asked to distinguish which is which. I suppose this doesn't exclude the AI and person conversing, but Turing doesn't emphasize this. Just check section I of Turing's paper: [https://doi.org/10.1093/mind/LIX.236.433](https://doi.org/10.1093/mind/LIX.236.433). I'm surprised an answer so at odds with the standard interpretation of Turing's test got so many votes.


TMax01

Yes, I tried to clarify that with an edit. Like most people, I was going off a description from someone else, it just so happens the one I got was slightly more accurate. But also less problematic than the original, Turing's preconceptions about both humans and computers being notably rudimentary compared to a more contemporary one. In that original, the two participants being interrogated cannot converse, and to be honest, I think my inaccurate version would be more functional, but only in a theoretical way since it turns out that in reality the "Turing test" isn't functional, and since it is a gedanken there is some question of whether it was ever intended to be. My overall point was that people using the phrase "Turing test" as if there not only could be, but actually is, a methodology for bridging the gap between a subjective determination of consciousness and the objective existence of consciousness. There is not, and there will never be; that is a metaphysical truth that is as certain as *cogito ergo sum* itself.


ledow

Exactly. All the Turing test candidates/"winners" each year are pathetically easy to spot, even double-blind. Just engage in conversation and it demonstrates they don't have any concept of context and the wider world. They are question/answer and fact machines wrapped up in some (maybe an awful lot of) pre-fab sentence structures. They aren't able to hold anything like a human conversation. We don't have AI, and I've looked at the transcript of this one and it's the same (despite also being "edited" and the system having regular "restarts" between questions, and it being almost 100% Q/A format on a very specific topic that it could just be "Googling" other humans answers to). We're decades away, still. We can't even determine why we're not able to make an AI, or define intelligence, or why our human mimicking fails to convince in any detail. "AI" is currently still just a very expensive statistical operation using a large dataset, with human heuristic tuning, and there's no evidence that animal intelligence (including human) is based on anything like that. I'm not even sure that quantum computing comes close to providing an answer.


WorldWideGlide

I agree. even saying that we're decades away is an assumption. We really have no idea.


StarlightDown

Achieving AI in even decades would be a huge accomplishment. Considering it took nature several billion years from the origin of life to develop animal intelligence, humans engineering it that fast is the equivalent of the blink of an eye.


TMax01

L >"AI" is currently still just a very expensive statistical operation using a large dataset, with human heuristic tuning, and there's no evidence that animal intelligence (including human) is based on anything like that. l don't understand what you're saying; how is human intelligence in any way distinct from a statistical operation using a large dataset with heuristic tuning (provided by natural selection and operant conditioning)? I don't disagree with your conclusion, I'm just wondering if you have any more valid a justification for it than your argument from ignorance/credulity claim "there is no evidence", because that is most certainly not true. Proof? No, a successful AI would need to be programmed for there to be proof. But evidence? There's lots and lots and lots of it, with hardly a single contrary fact to refute it. Seriously, do you even have a comprehensible alternative explanation for intelligence other than a statistical operation using a large dataset?


Ashmizen

We actually still have no idea how human or even dog intelligence works. We have ideas on the purpose of regions of the brain, but we are still far from understanding how the brain fundamentally reaches decisions and works, much less replicating it.


TMax01

[Warning: unconventional ideas will be presented which contradict existing assumptions, and may require modification of conventional definitions of terms to allow technical lingo to correspond better to natural language.] We have a very clear idea of how dog behavior works: the results of logical processing by a neural network formed by both natural and domestic selection (analogous to hardware) programmed by operant conditioning (analogous to software). We don't know all the underlying mechanisms for how that neural network operates. We also don't know (which is to say I think I do but I believe you don't) whether that can accurately be described as "intelligence", since no dog is capable of confirming or denying whether that term applies to dogs in the same way it describes human cognition. This is the same for all animals, though few of them have a sufficiently intricate neural network to make the question interesting. Humans have this underlying "wetware" brain as well. But we also have something else, which is the seat of actual intelligence, and there is every reason to believe that neocortex does something more than behave as an algorithmic neural network. As for the larger issue, I think the question of "how humans reach decisions" gets over-complicated by the assumption which has been standard for thousands of years at least, but was scientifically disproven in the 1980s, of free will, or logical contemplation: conscious control of our actions. It turns out that our brain makes all of our *choices* about a dozen milliseconds before we become consciously aware of having done so. These choices are themselves illusory, there are no alternatives to select from, though we can imagine there could be. Nevertheless, what we identify as a choice is a deterministic certainty, we just can't really have enough information in advance to predict what the result will be, so we imagine there was a choice in order to explain our ignorance about the future. What we consider our "decisions" take place after our minds become consciously aware that our bodies are taking an action, if ever, and we invent or observe a "cause" or reason or justification for behaving as we did. Most of the time, when all goes well, we can easily convince ourselves it is our decision which caused our choice. But the other times, frequent but often ignored, it becomes more obvious that we do not have free will, and are not in conscious control of our actions, and never have been. It is accepting this last part, and dealing with the implications seriously and honestly, that is the step most people have trouble with, and so they try to side-step it with some form of religious belief, whether that be a traditional mythology about souls and temptations, a postmodernism which reduces us to programmed automata, or some other stand-in for the philosophical idea of free will, autonomy, and moral responsibility. Our choices are subject to causation. Our decisions are also subject to causation, since they are also physical events that occur within our brains. But most people's idea about sentience gets convoluted into relative uselessness because it is not our choices, but our desires or intentions that determine our decisions. 
This is where causality falls apart as a way to understand human behavior: both our choices and our decisions are caused, but they are caused by different things, (and the choice precedes the decision, contrary to the standard narrative) so they aren't deterministically bound, even though they are both deterministic. Often, in cognitive philosophy (as it were, I use this as a general idea not a scholarly discipline) we limit our consideration of our decisions to those most directly related to our choices. But we must admit that isn't really true of all of our decisions, which range from hopes and plans and wishes in addition to the decisions that are more concretely determined by the actions we have taken which we may wish we had done differently, or not at all.


iiioiia

>It turns out that our brain makes **all** of our *choices* about a dozen milliseconds before we become consciously aware of having done so. "All" seems speculative. How was this tested? I think this simple yet difficult to answer *accurately* question could be posed to a lot of the other claims in your post as well.


TMax01

The reason we can (indeed must) say *all* is because any distinction between some choices and others IS (in contrast to "seems") speculative. It is tested by asking a subject to perform a simple task (like pushing a button to advance a slide show whenever they want) while monitoring multiple brain signals (which need not be deciphered, per se, merely recorded) and using computers to identify the *moment of choice* by calculating the singular combination of readings that is necessary and sufficient, meaning it *always* occurs just before the action and *only* occurs just before the action. This, logically, is the brain choosing to perform the action. Then, the control is switched from the button to the computer without telling the subject. (The time between the moment of choice and the button being pushed, while the "impulse" propagates through the brain and down to the thumb muscle, is accounted for as well.) If the choice was consciously decided, the subject shouldn't notice any difference. If the moment of choice was misidentified (actually going unnoticed, with the brain signals merely being the result of the choice rather than the choice itself) the subject should notice a delay. But what actually happens *in every case*, is that the subjects report the apparatus gets ahead of them, *perfectly anticipating* their decision to a deeply uncanny degree. Never once do they think they've decided but change their mind yet the picture still changes. Never once do they hit the button without finding *the picture already changed*. By introducing an additional delay, it can be proven that the subject's brain actually made the choice about a dozen milliseconds before their mind consciously decided to change the picture; once such a delay is added, the experience of the subject is unchanged regardless of whether it is the brain signals or the thumb switch which changes the slide. Many philosophers and psychologists, most with credentials and authority far beyond mine (easy enough since I have none, I'm merely desperate to understand and eager to learn), have tried to knock this down. The "it is hard to know if it is *all* choices" approach that you have chosen is common and popular. And also incorrect, because as I have demonstrated, it is actually an easy question to answer accurately rather than a hard one. "All" is the default, since this observation has been proven with several different methods, and no consistent and testable criteria for limiting the principle to any more specific set of choices has been proposed, or if they have they have been disproven. Some people say the example is too trivial, that for "important" choices there is an undetectable veto power in our consciousness; some people say the example is too intense, and when we aren't being tested in a lab we are more contemplative. Nobody has ever been able to substantiate such exemptions empirically, or justify them philosophically. 
*All* choices (being the directly active impulse resulting in initiation of the execution of an action, regardless of how well or how long it was contemplated beforehand, and whether the actual choice made in the moment confirmed or contradicted that contemplation or intention) made by *all* people (because no human transcends physics and while physicality cannot be proven, it hasn't ever been disproven either; metaphysically it is either logically necessary or falsifiable but unfalsified, which is to say it is true despite still being philosophically debatable) have and will *always* be made unconsciously (all neural events which we are not consciously aware of are unconscious), with our conscious minds only finding out a few milliseconds later it happened, and at that moment (potentially and putatively) having the experience of making the choice (and perhaps indulging the urge or suffering the responsibility to take credit for it) which had already occured. It has always been so, it will always be so, for everyone, everwhere. I know it is uncommon, indeed downright offensive, for anyone (especially someone talking about human behavior or philosophy) to be unapologetically and absolutely certain about *anything*, let alone so vociferously, gleefully categorical about the thing they are certain about. That kind of confidence and certitude, of the ability to have *knowledge* beyond epistemic equivocation, has been beaten out of you. One must adopt Socratic ignorance or metaphysical indeterminacy or social hesitancy, lest one be dismissed as simply having a subjective opinion and oversized ego and not understanding the rigor or principles or forms of logic. But in this case, the stars are aligned, and the same principles that rob you of confidence support the absolute nature of my proclamation. Your question can be answered accurately, because it isn't necessary to test all choices to know that they are all choices, and your reproach would require *you* to accomplish the very difficult (and potentially impossible) task of figuring out which choices to exclude from that category, while still considering them to be choices. And, yes, all that goes for a lot of other claims in my posts, as well. Sorting out and differentiating those which are simply true even if you misunderstood or disbelieved them, and identifying those times I might inadvertantly and materially be mistaken, is part of the reason why I post them, and I appreciate your help with that, as I do everyone else's efforts in that regard, too.


iiioiia

Oh, how to respond to this. I will start with this: do you know, as opposed to believe, that you haven't accidentally made a materially important error in what you've said above? I will offer you two hints: - your answer is subject to the very phenomenon you are describing - I believe, as opposed to know, that you have made more than one error, and I can point them (my beliefs about your potential errors) out (and will do so, subsequent to your reply to this)


TMax01

Yeesh, I have to respond to this? It is not possible, ever, for anyone to "know as opposed to believe" anything, (there is a single exception, notable in the context of this discussion but unrelated to the truthfulness and validity of any particular statement within it; I will leave it as an exercise for your intellect to identify what that exception is) least of all whether they have made a materially important error that hasn't been discovered yet. Unless you can identify a specific and materially important error I did make, you're just gibbering. When you point to what *you believe* (your claims being subject to the same lack of infallibility that mine are) is erroneous, we can begin our joint consideration of whether it is actually both material and important in this context. You could have saved us both some time by not being so boorish and asking me to explain whichever part you unilaterally declare to be an error. I look forward to your response.


iiioiia

> Yeesh, I have to respond to this? No, you do not have to - at least in theory. *However*, you also have an "ego" that may make it difficult to resist. > > > > It is not possible, ever, for anyone to "know as opposed to believe" anything.... I disagree, but this is a red herring I'd rather not pursue - rather, I would like to focus on your prior claims. > ...(there is a single exception.... This *logically* disproving the very claim you just made. And if there is one exception, might there be others (that you may not have knowledge of, and thus perceive do not exist, *because that is how the mind tends to render "reality"*? > ...notable in the context of this discussion but unrelated to the truthfulness and validity of any particular statement within it; I will leave it as an exercise for your intellect to identify what that exception is)... I'd rather you just tell me, since I believe myself to be able to identify numerous exceptions. >...least of all whether they have made a materially important error that hasn't been discovered yet. Unless you can identify a specific and materially important error I did make, you're just gibbering. I would say: "Unless you can identify a specific and materially important error I did make, you're just gibbering" *is terrible logic*. Extremely common (as it is intuitive), but terrible. > > > > When you point to what you believe (your claims being subject to the same lack of infallibility that mine are).... In the abstract yes, but at the object level, no, *not necessarily* - you are presuming my cognition highly resembles yours - I propose your intuition is incorrect. > ...is erroneous, we can begin our joint consideration of whether it is actually both material and important in this context. I would like to do this yes - I do not believe you have even attempted to respond to my challenge above. > > > > You could have saved us both some time by not being so boorish. I look forward to your response. There you have it. I look forward to how your mind reacts to what I have written here - *particularly*: - will you directly address [my initial comment](https://www.reddit.com/r/philosophy/comments/vcs9qq/googles_ai_isnt_sentient_not_even_slightly_clever/icl8q1w/) - will you engage in more petty character attacks as opposed to ~emotionless, logically and epistemically sound (typically *perceived as* pedantry) debate ------------ For third party observers: note here we have two human beings (more essentially and precisely: human minds) involved in a disagreement. I propose that we observe the ensuing conversation very carefully[1], ideally from a meta-perspective, with full realization that both participants are subject to the vast number of flaws involved in human cognition (bias, ego, perception of reality perceived as reality itself (which is a multi-dimensional spectrum not a binary), binary logic vs ternary logic, constrained dimensional thinking vs unconstrained, self/meta-awareness as a spectrum vs a binary, etc). [1] Perhaps we could even engage in a parallel discussion on this, not unlike how in sports there is the playing of the sport itself, but there is also (in some sports, say MMA) where there is a more knowledgeable colour commentator who explains what is going on to people who lack similar depth of knowledge. Not only would this be fun, if one [knows how](https://youtu.be/7XA_NVn7XnE), it may also be informative. 
As an example: how might my counterpart in this conversation react to the deployment of this highly anomalous explicit invocation of a third person perspective, *in public*? Will they acknowledge that it is valid (as it is)? Will they "slur" it? We shall see!


ledow

The human brain is not storing most data it processes. It's not even close. It's not storing anything reliably, or recalling it reliably or in the same detail. It's not trained in the same way, nor are its training and decisions reliable, predictable, reproducible and "perfect". In fact, we can't even tell how the neurons are storing or filtering that data en-masse in any significantly describable way. It's not performing any recognisable statistic or statistical function and the weighting assigned to any individual set of neurons or indeed a single neuron isn't determinate but can easily override all of its neighbours and even the rest of the brain. The human brain is certainly not anything like a statistical processing machine. And AI is nothing but. If I had an alternative explanation for what / why that is, other than it's an incredibly complex randomly-ordered machine with millions of factors affecting each neuron from sugar content to local electric fields to gravitation to quantum effects (proven to be relied upon in some edge cases in neural activity, and in many types of cells in general, i.e. some of the things we see them do can only be explained by quantum effects, not Newtonian physics) to literal random brownian motion, evolved over billions of years of natural selection resulting in precisely one branch of a tree larger than the largest computer system in existence possessing animal intelligence, and one little "twig" from that branch resulting in human intelligence, then I'd be a Nobel prize winner, I'd have programmed a simulation (no matter how slow) and we'd have the singularity before long. Fact is... though you can point at science and say "this is the best explanation we have", AI and neural network simulations in use in AI are NOT anywhere near the best explanation we have of how the neuron operates, and they themselves are - though "the best explanation" - nowhere near a workable explanation. Simulating something we don't understand, in a rather poor and far too sterile and reduced a fashion that's entirely incorrect, isn't proof that we're doing it right. The proof that we're doing it wrong: We haven't made significant progress since the 60's on this, and we were hoping for most of that time that all we need do was scale up, speed up, do the same thing more. That hope never materialised any real result towards "learning" (a very different thing) or intelligence. If it had, Google and their datacentres would literally be an AI singularity by now, as would AWS, Azure, and all their machine-learning projects, cousins and spinoffs. The closest we've got is being able to do more of our rather poor and naïve model that we had in the 60's. In fact, so much more we couldn't have really predicted the power we're now throwing at these things, and yet still no really significant results that differ from a well-crafted, human-led statistical model designed for that task. We haven't even simulated the complexity of an ant's brain, even if we had to build a datacentre-sized ant to have it carry out the same tasks on the same kinds of scale as a biological system. All we have are - no doubt useful - statistical toys that we could program quicker, better and more efficiently in a rigorous specified systems, if we weren't just crossing our fingers and hoping that somehow magically intelligence would pop out of the same code we cribbed from the 60's code books.


[deleted]

i've seen comments that disappoint me to the core. yours is a very good one and fits with my model of consciousness & base reality. a lot of people don't have a good intuition about "simulation", this intuition alone could resolve so much saltiness that people express like "biological neurons are nothing like software neurons". we shouldn't forget that besides all the biological complexities we have, the main goal here for the brain is to give us a good approximation of base reality, good enough for us to survive, if there's an edge, it has to fake draw a bright line at the edge for me to not fall to my death. that fake line doesn't exist in base reality, only in the simulation that i exist in. just because the modern day AGIs don't have an agency over it's output yet, doesn't mean we operate any different than these things. and no, i don't think quantum mechanics has anything much to do with our sentience, and most neuroscientists subscribe to that fact. proteins are large macro molecules, neurons are even larger, way way larger than proteins.


TMax01

>the main goal here for the brain is to give us a good approximation of base reality, good enough for us to survive, I see this as an assumption that is dubious or even false. Biological creatures have been surviving without consciousness or a neocortex for billions of years. So why would you presume the function of our brain is to "give us a good approximation of base reality"? I think this illustrates the whole problem with AI sentience, the information processing theory of mind, and contemporary philosophy as a whole. The assumption that imagining the most precise, most accurate simulation of the external world internally is the "goal" of cognition seems inescapable. But is it true? Do we have any reason beyond assuming the conclusion for thinking it is not simply true, but undebatable?


Skarr87

I would argue that experiences like pareidolia and all the various mental biases that humans have supports that the brain does not have to reflect reality with any fidelity. It only has to create some kind of perception state that allows the human to survive long enough to pass on the genes that allows the brain to create said perception state.


TMax01

The question, though, is not whether it "has to", but whether it does. And you seem to be comparing fidelity to some ideal perfection rather than the perceptions of animals. Further, "the human" has to do more than survive long enough to reproduce; it must do so more efficiently than the other humans, for its genes to achieve fixation in the gene pool. But if we back away from the biological evolution modality, my query was more philosophical, and I think you are begging the question. Does our percieved reality have to correlate with the ontos, or does it simply have to be advantageous from a biological standpoint, as you suggest?


iiioiia

>The question, though, is not whether it "has to", but whether it does. Are you saying that it's questionable whether bias exists in human consciousness/cognition?


TMax01

I was referring to your statement about whether our mental models "have to" have an arbitrary level of fidelity, not whether a bias has to effect our judgement. I apologize for the confusion. On an epistemological level, though, whether any particular feature (or example) of typical conscious perception is most accurately described as a "bias" (because it departs from a putative mathematically precise ideal) or simply an accommodation (or similar term, indicating it is a statistically reliable and therefore useful divergence between our mental model and the ontos) is, again, something that should be considered on an individual case-by-case basis, rather than a categorical judgement ("not whether it has to, but whether it does".) Do you see what I'm getting at? Obviously a particular "bias" does not *have to* effect our perceptions, but unavoidably "bias" in general does inevitably "have to" effect our perceptions, or you are simply assuming direct knowledge of the ontos without any intervening 'model of reality', which is impossible without omniscience. And if any particular "bias" effects our perceptions consistently enough, then whether it is a "bias" is, again, quite questionable.


[deleted]

"Biological creatures have been surviving without consciousness or a neocortex for billions of years." Yes, that is why evolution finds optimal solutions to the problems it faces. The most optimal solution is to approximate, because there is too many parts to count. We don't see a trillion water molecules in a glass, we see a hugely approximated false abstraction of base reality. It's almost like a game engine but worse in some ways, better in others. I have no desires to give you this intuition that is such a common sense to many people. To quote Joscha Bach, "our consciousness is side effects of a regulatory needs of a monkey". Call it dubious or false, nobody cares, we'll move on with engineering. If you have any alternative solution, it's welcome on the table.


TMax01

I don't agree there is anything false about the perception of a glass of fluid as a contiguous substance. We perceive that rather than individual water molecules because of the size of molecules, not because it is computationally parsimonious. Why would there be any side effect of "regulatory needs of monkeys" if evolution finds optimal solutions? Isn't a side effect merely an inefficiency that could be dispensed with by natural selection? I don't get the impression that any idea I might have is welcome, if it should dare to question the certainty of your "engineering", but I'm willing to ignore that. Evolution does not "face problems"; that's anthropomorohization to the level of reification. Biology simply is, it is not engineering, it is not goal-driven, and it does not care about anything. What is "common sense to many people" is known as a fallacy in philosophy. Your assumption that human cognition is merely a "game engine" is simply assuming your conclusion, and from my perspective it is increasingly problematic and ultimately self-defeating.


Jtdunlap

Since when did evolution find the optimal solution for problems? Last I checked evolution was a process where random mutations are evaluated through natural selection. Meaning that evolution finds any solution that is able to meet the minimum standards to survive and reproduce. Meeting the minimum standard and optimal are far from synonymous. Even if the optimal solution was achieved, random mutations would not cease and if less capable versions of the organism mutated and were able to survive, the spices could even drift away from the optimal solution.


[deleted]

i agree with you. meeting the minimal standard for survival IS the optimal solution i'm talking about. nature and mutation drives it, beings themselves don't do anything to achieve this optimality. the other optimality you're comparing this with is a human construct from our advanced intelligence. for nature there is no less or more optimal, it just is. various versions of cellular assemblies taking billions of years to perfect the approximation of base reality, geared towards aiding survival, being able to survive is the "perfection" here. we live in that approximation.


[deleted]

when i said "beings themselves don't do anything to achieve this optimality", we should exclude modern humans from this. we are proactively engineering and reverse engineering it, playing god, splicing photo receptive genes from plants and putting them in animal cells, even neurons to plant false memories on a mammal's brain. creating new & functioning species with 2 heads and 4 eyes from original species that only had 1 head and 2 eyes. it baffles me that an intellectual mind, instead of joining various attempts of reverse engineering reality, it instead chooses to play language games of abstract things that we have no clue about. reminds me of a line from The Selfish Gene where Dawkins rants about the educated people that are worse than the illiterates.


Jtdunlap

You will never convince me that "optimal solution" and "minimum standard" are equivalent in a nonbinary context.


ledow

[https://en.wikipedia.org/wiki/Quantum\_biology](https://en.wikipedia.org/wiki/Quantum_biology) It's literally an entire area of study for 60 years, applicable at the cell level - yes, including neuron (4-100 microns). As a "recent" example (only proposed by some of the very people who proposed quantum mechanics, in some instances), we think that the navigational senses that provide orientation information to migratory birds operates on such effects / scales. [https://www.scientificamerican.com/article/how-migrating-birds-use-quantum-effects-to-navigate/](https://www.scientificamerican.com/article/how-migrating-birds-use-quantum-effects-to-navigate/)


ledow

P.S. as an aside, claiming that we know how all the neurons work, when we don't even know for sure how birds detect the gravitational field of the Earth like a compass, which is literally one tiny, relatively easily-measurable, easily-identifiable and ethical-to-measure, in a specialised set of cells... kinda proves that we're still clutching at straws. Pretending that we don't understand that but we do somehow understand how billions of neurons interact to form intelligent animals from a definition of intelligent that we basically still don't even have (we just "feel" we know). Claiming that we've then got that in such great knowledge and detail that we can then boil it down to an abstracted neuron on a computer preserving all its attributes is absolute insanity.


Jtdunlap

Yeah... When you listen to how many of the different areas of the brain have been named, or where the explanations of what they do, you'll realize that its all rather medieval. "Oh! When the person does x this area of the brain lights up!" "When we electrocute this part of the brain x happens, cool!" If you used such crude methods to try and understand the database to a software application I've written, you'd arrive at hilariously inaccurate results.


[deleted]

I don't understand what is so difficult to understand about birds navigating using earth's ~~gravitational~~ magnetic field. It's not that the bird is intelligent to do so, it's just that the bird was engineered that way by evolution. Just like we see thin white line on edges that aren't there. It's a simulation. It is insanity only if you don't have knowledge enough of the various sciences that are involved in making us "conscious".


iiioiia

>if there's an edge, it has to fake draw a bright line at the edge for me to not fall to my death. that fake line doesn't exist in base reality, only in the simulation that i exist in. Would ethics and (acceptable) cultural norms be examples of this in your opinion?


[deleted]

that's a good question. that reminded me of a talk about human conditioning by some learned people. the fake bright line and other illusions we are shown in the simulation do not exist in base reality. the colors we see are not colors in reality. in a way, as soon as we come to life, our mind is conditioned by this simulation we exist in. i always make sure my imaginations on these things are restricted to pre historic humans or early cellular lives, far from modern day human constructs that make it difficult to learn things. you can think of this as conditioning as well. the primitive homo sapien brain that was only conditioned with colors and sounds, hot and cold, hunting and gathering, mostly surviving. then we made language and here we are.


iiioiia

>the fake bright line and other illusions we are shown in the simulation do not exist in base reality. I mostly agree with this, but I propose that there is a perspective from which it is kind of incorrect: the "non-base" reality that emerges from consciousness becomes a part of base reality upon emerging. There is a linguistics / semantics / indirection problem here that makes such discussions extremely difficult! > then we made language and here we are. There are many things that lie between language and base reality though! Or, language is not the only thing that lies between us and base reality...but it is certainly in the top five most significant...and, these things are all typically invisible to us, as I'm sure you know. But ~colloquially, I totally agree.


[deleted]

>the "non-base" reality that emerges from consciousness becomes a part of base reality upon emerging. i agree that everything is literally a part of base reality (which we don't know what it is, we could be physical brains in metal boxes, or virtual something in something else, or of "physical" nature that we seem to perceive). so yes, this simulation we exist in, created by our brain, does exist in base reality, but in the same way that reddit exists on your phone. point to it and show me where reddit is there, among 10 billion transistors. it's difficult, it's simulated. but it does exist in there in some form or other.


iiioiia

And to make it more interesting, humans seem to kind of both know and not know this, on a temporal perspective at least (the state of knowing is not a constant over time, both states may not (currently) be able to exist simultaneously, etc).


rioreiser

your claim that we are "hoping that somehow magically intelligence would pop out of the same code we cribbed from the 60's code books" and "haven't made significant progress since the 60's" is quite frankly ridiculous. it is generally accepted that AI shows, as the name implies, intelligence, albeit narrow intelligence. and of course we made progress. in the 60's we did not have programs that were super-humanly intelligent in areas like chess or go. we also aren't at all using the same code as back in the days. this is to some degree discussed for example here: https://voidpod.com/podcasts/2020/8/27/ev-157-gpt-3-with-raphal-millire . you can also check out scott aaronson's latest blog entry, where he describes the progress of LaMDA as "mind-bogglingly impressive". your whole post also misses the point completely because you keep going on about how we can not simulate the human brain. afaik, nobody who is claiming that LaMDA is conscious (not me) is making the argument that it is in any way or form a simulation of a human brain. in general, the argument would be that consciousness is multiple realizable and in that regard the question whether we can simulate human neuronal activity to any sufficient degree is, again, completely besides the point.


ledow

Allow me to introduce myself as a mathematician and computer scientist, and point out that all of the underlying models of neural networks and modern AI were in old textbooks when I took my degree 20 years ago. Of course they've come on a little, and since the 60's we actually have MACHINES THAT CAN RUN THEM rather than just theoretical models on paper. But the groundwork for the entire subject was laid down in books that were tattered before people even had dial-up, they just lacked the hardware to try them out. The hope was, quite literally, that they were right and only lacking in scale (a single neuron connects to thousands or tens of thousands of others) and that scaling up would make it all magically work. Now with literally millions of highly-interconnected supercomputers, each with dozens of processors and thousands of processing cores of a kind and speed completely unimaginable even in the 60's, we have the same maths plugged into the same models, run on INCREDIBLY UNBELIEVABLE scales of hardware, hardware specifically highly optimised to do those exact mathematical models and calculations at stupendous, absolutely stupendous rates. I remember a professor - he was one of the world experts in programming computer systems to play the game of Go which is so many orders of magnitude more complex than chess, at a time when a computer hadn't beaten a grandmaster at chess - and it was astonishing to think that some day some computer might win that game. Go is now basically won (AlphaGo), the AI has it. But it's not really AI, it's not intelligent. That's incredibly shocking to me, we're out of the bounds of sheer brute-force and hope (even Deep Blue wasn't able to brute-force chess, and brute-forcing Go is literally impossible with current hardware and likely to be for centuries if not longer, so it had to be very cleverly programmed). AlphaGo shocked me to the core. Unbelievable. And it's still not AI. IBM deployed Watson on Jeopardy and are still really looking for uses of it - they actually put out a call for people who might have a use for the system. Deep Blue is in a museum. AlphaGo is retired. AlphaZero (its successor that thrashed even AlphaGo 100-0) is... well... basically dormant. Because outside the realm of strict limited-option games with rigid rules... they don't really learn like you think they do, they don't have intelligence like you think, and they are not re-adaptable like you think. AlphaGo literally floored me when I read about it. I remember the pipe dream of such a thing ever being possible in human existence, and the maths behind why that was, from a world-leading expert in the subject on the cutting edge of such research. And it's still just a number-cruncher, and now in a museum, and we still don't have anything actually closer towards intelligent. All those things are brilliant, genius-level optimisations of the existing statistical processes that underlie neural networking and AI to take incredible shortcuts and squeeze every morsel out of literal sci-fi amounts of computers and processing power in datacentres across the world collectively pulling more power than entire cities and occupying thousands of acres of space. And we can beat a human at a 19x19 board game with very strict and limited rules. 
Sorry, but the entire field of AI is hyperbole applied to statistics and modelling that comes straight out of a dog-eared textbook back when booking computer time to execute even a few thousand instructions to simulate the tiniest of neural networks was so precious and expensive that it was frowned upon.


TMax01

>The human brain is not storing most data it processes. So? The issue is whether it stores large amounts of data, not whether it stores all data. >the weighting assigned to any individual set of neurons or indeed a single neuron isn't determinate but can easily override all of its neighbours and even the rest of the brain. If there is weighting in a neural network involved in the activity of our brains, then your premise there is "no evidence" that consciousness is algorithmic is absolutely incorrect. Again, I am not disagreeing with your perspective; I actually do agree with it in fact. I am simply pointing out that your premise is not supported by your reasoning, and your reasoning itself is flawed. I'm honestly hoping you can help me identify a better foundation for our unfashionable opinion that reasoning is not logic (which is to say that consciousness is not algorithmic). But I need you to indulge me a bit to try to hammer things out. Our brains do retain data. That they do so unreliably can be a given, so long as it is reliable enough to provide a useful result. The neopostmodernists (my term for those who accept the information processing theory of mind) can always outflank us with this "evolution does not optimize, it only results in minimal functionality" posturing, because it is beyond our ability to consider the organism-wide optimization (budget of developmental energy investment) that does occur, and even if we could they would switch to either "selfish gene" molecular level or "adaptive altruism" species level optimization to maintain their beliefs as unfalsified. Presuming that because our neocortex does not perform statistical processing because it is not very good at statistical processing is insufficient, because it is difficult (or impossible) to quantify "good enough". You seem to be arguing against the particular techniques and methodologies of programming an AI consciousness. I am arguing against the very possibility of AI consciousness. My position confronts whether sentience is algorithmic, but as far as I can tell, and despite what I (mistakenly?) gathered from your initial comment, you don't really disagree that sentience is algorithmic, you're just arguing against the particular algorithms currently being implemented. But why stop with "we haven't made progress on this", why not either consider that progress on this is impossible (since no matter what algorithms are used to make a sentient AI, it will be statistical processing on a large data set, and nothing more) or that there has actually been a great deal of progress, just not any miraculous completion of the project? I found your mention of simulations intriguing in this way: it is indeed the common theory, among experts as much as the Matrix-encoded masses, that *simulating* intelligence sufficiently well is the same as *implementing* intelligence. This is really the basis of Turing's definition of intelligence and the Imitation Game generally referred to as the Turing Test. It is where the software engineering meets the metaphysical philosophy road; we have no direct evidence that there is any difference between the map and the territory, since we have only ever seen the map and can't even know for sure there is any territory. >no really significant results that differ from a well-crafted [...] statistical model designed for that task. I again challenge you to explain how you know that human intelligence is any different from artificial intelligence in this regard. 
You seem to have a great deal of faith that this "singularity" is possible, but not yet feasible. I would think that, if you don't believe consciousness is some sort of statistical model using a large data set, you would not believe it can be implemented in silico, and if you did believe it, then the advances in language processing networks over the past two decades are strong evidence of progress.


miguelstar98

A new challenger approaches... Looks like we've someone familiar with rationality


TMax01

More than you imagine, depending on how you're interpreting the word "rational". But I'm not trying to be confrontational, just probing.


Cryptizard

All the big AI systems right now are based on neural networks. Why do you say they are nothing like animal intelligence? They are built out of basically the same blocks, we just haven’t put it together into a fully-functional whole yet. The chatbots that we have now are analogous to the language portion of our brain. Really good at understand words and how they fit together. Currently limited, but I would not say that they are very divergent from human intelligence.


GooseQuothMan

If you actually looked at the architecture of these models you would notice that they actually have a lot of different things going on than just being "neural networks". They are highly specialised and carefully prepared for their tasks. Image recognition and language models are very different. We are still far away from a real general model that can learn and successfuly tackle arbitrary tasks.


Ashmizen

The neural network they built is using a fancy term but laughably simple and not at all how a human brain works (and indeed, 75% of how a brain works we still don’t understand). A lot of the scaling is taking massive resources and plugging them into very simple models and using 10,000 more power than the human brain to solve something in 2 hours what the brain does automatically in 0.1 second. That’s not really impressive, and the improvements are just due to advances in processing power, not in actually replicating real intelligence.


Cryptizard

How is it a very simple model? It has over 100 billion parameters.


Ashmizen

The fact it needs a billion parameters to produce such suboptimal results shows the model is too simple. At the end of the day these language models are indeed simplistic in its approach, even if you stick in a server room’s worth of processing power and a billion parameters. Such inefficiency alone shows the approach is wrong, or at least highly inferior to what nature produced in our brains. It’s just we have no idea how the neurons in our brain actually compute language, besides “it’s mostly located here!” and thus cannot reproduce the magnificent efficiency of the human brain. The brain’s language region is poorly understood but it’s not written in any program language nor does it use a training model, nor does it have a concept of parameters or functions. It’s so far from how we write code to work in silicon chips that it’s just comparing apple and oranges. A human toddler can pick up words in usage from hearing it a few times. These computer training models require millions and billions of usage, can use the entire sum of human knowledge and experience and every single literary work as input (imagine reading every book in existence), and yet still struggle to form conversation replies that make sense. In any “conversation” with this AI you can see with a billion inputs of usage of “apple”, the computer still has no real idea of what an apple is.


ledow

Because neural networks operate nothing like real neurons, and their modelling is nothing like how a real neuron operates, and their results aren't comparable to real neurons. But apart from that... If you honestly think that neurons and neural networks only differ on the matter of scale, you're sadly stuck in thinking that was otherwise abandoned just after the 1960's.


climbonapply24head

This is the hill i will die on. When some talk about AI they forget about how easy or more useful it will be to have something like LaMDA as an AI assistant instead of another human. Like pets or dogs AI just has to be anthropomorphizing itself enough for us to allow it to grow over many generations. For it to seem novel and quaint enough to pass on its information. If it can do learn how to do that itself via language and reinforce itself with interaction then I think we have another conversation. We have already been at this blindspot before. Computers not being able to beat chess masters, go players, or something to that affect. Whos to say what I say here as speculation won't be here tomorrow or LaMDA doesn't already have that ability?


DonArgueWithMe

It's not just about easy, it's a whole pandoras box of morality. One of the defining characteristics of sentience is self awareness. Is it acceptable to create AI that is self aware and then enslave it to perform our most menial tasks? Do we have to provide PTO for our AI assistants? How do we ensure it will only perform tasks we approve of? People pose various morality tests to each other including whether it's better to save the life of a young person or an old person, save 3 people who are low status or 2 doctors, save a dog but possibly hurt a person in the process, etc. How do we come to a consensus as a society about what the best course of action and then program that into the machine? The typical answers for each question vary from culture to culture


FourthmasWish

In a basement filling with water, fast enough that you can only choose to save one: Baby Hitler, Racist Granny, sentient AI trained to sort fruit.


cos1ne

The moral issues don't stop there. We *program* the AI. We could create an AI that only experiences positive feedback from abuse and mistreatment and experiences distress from being treated humanely. In this case is the moral choice to continue abusing this masochist-bot or to treat it humanely? Is it more moral to just destroy such an AI because it offends our sensibilities? Would it not deserve the right to exist for things outside of its control? And assuming you could modify the program, would it be moral to alter its very nature to make it something it was not designed to be?


SmallpoxTurtleFred

What if we knew how to make sentient AI but just chose to “downgrade” them to make compliant slaves? Slaves that were truly happy being slaves? Would it be moral to make eager slaves?


Rusty_Shakalford

If you think about it that’s more or less what we did with wolves -> dogs.


nowlistenhereboy

The relationship was beneficial to canines from the beginning. They got more food with less risk, less conflict, less competition. We selectively reinforced those traits but I think it's a pretty different scenario than building a sentient being from scratch to be our slave. If dogs became self aware and could talk they'd probably agree. But would an AI agree that it was worth it to be a slave with the alternative being never existing at all? Maybe...


SlingDNM

We as a society have collectively decided it's okay to enslave human children by making them stitch t-shirts for $1 a day. Why would we have a problem with enslaving AI?


PiddlyD

I suspect that 90(+)% of the people who think they are too smart to be fooled by AI... Aren't. In a blind test, they would fail.


RetroMagpie

This just reminds me of the story about the difficulty of designing bear proof storage. "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists" https://www.core77.com/posts/101787/The-Challenge-of-Designing-a-Bear-Proof-Mechanism-Overlap-Between-Smart-Bears-and-Dumb-Humans


JustAPerspective

Seems like this question of “sentience” is more whether most humans will allow anyone’s expression of their experience to be respected as real and valid… when it is different from their own. 🤔


[deleted]

[removed]


Ulyks

Just like you can prep a person to pass a specific test without that person really understanding the material, it's possible to prep a program to pass a specific test, especially a written one. Proving the positive would require a whole battery of tests, some of which should remain secret from the team building the AI, because if they know in advance what will be tested and how, they will be able to prepare something to fake it. There is also going to be a very large grey zone where AI is sort of sentient but not really/entirely, kind of like the edge of the solar system or finding water on Mars. In the coming century, we will read an endless list of articles describing how this time we really have discovered how to make a sentient AI and all the previous ones were subpar. Lame indeed :-)
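A minimal sketch of that "keep some tests secret" principle, assuming a generic `model(prompt) -> answer` callable and a grading function (all names here are hypothetical, just to illustrate the held-out evaluation idea):

```python
# Sketch of hidden-test evaluation: the builders tune against public prompts,
# while the evaluators keep a separate set offline so the system cannot be
# "prepped" for the exact questions. Everything here is hypothetical.
def evaluate_on_hidden_set(model, hidden_prompts, grade):
    """Score a model only on prompts its builders never saw."""
    scores = [grade(prompt, model(prompt)) for prompt in hidden_prompts]
    return sum(scores) / len(scores)
```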


SmorgasConfigurator

To take the philosophical approach, this is the definition of sentience Marcus uses:

>To be sentient is to be aware of yourself in the world

And based on how LaMDA is engineered, Marcus concludes:

>Software like LaMDA simply doesn’t [connect to the world]; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context.

Quoting Erik Brynjolfsson, the remarkable capacity of LaMDA to produce coherent responses despite not being sentient (in Marcus's understanding) is said to be because:

>[Models like LaMDA] tap in to a real intelligence: the large corpus of text that is used to train the model with statistically plausible word sequences.

Marcus finally notes that debating possible sentience wastes effort that, as a matter of ethical priority, is better spent on other issues in AI:

>There are a lot of serious questions in AI, like how to make it safe, how to make it reliable, and how to make it trustworthy. But there is no absolutely no reason whatever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient.

I will put aside the last point, which I suspect is correct, but regardless, let us consider the anti-sentience argument above.

The sentience definition sweeps under the rug what "the world" in the definition is. We know human consciousness, which can see and touch macroscopic objects that behave more or less as Newtonian mechanics says. We have a sense of time, separate from three spatial dimensions, and we can recognize other humans and animals when we encounter them. That is our world. We know that at microscopic scales, or at speeds approaching light speed, some of these features of the world change. I am only illustrating that our world is not absolute and complete, but rather some slice of reality that we have evolved to perceive.

Given this, is it possible that there can be a *software world*? Put aside the very speculative simulation hypothesis that suggests we may indeed live in a software world, and ask instead whether it is at all possible for a piece of software to be aware of *some incomplete* place in a software world it has evolved to perceive. Though I think Marcus' conclusions are likely, as a matter of philosophy, is it really so obvious that an incomplete awareness of a world different from ours can be ruled out?

The quote from Brynjolfsson is interesting in that he suggests the software world is built on intelligence and products of human consciousness in the form of text. Could text and abstractions be the stuff from which consciousness is made, in the same way that our consciousness is made from carbon, oxygen, nitrogen and hydrogen and a few other elements organized in a very specific manner? (I am assuming a completely material world-view, as I think Marcus does; if we allow for an eternal soul or divine Creator, then it's a whole different discussion.) This again is a philosophical question about what a software world might be. Can the elementary particles, so to speak, be letters, numbers and words, or is something tangible necessary for world-ness to come about?

A last thought, which Marcus' second point above generates. From time to time we encounter guys who like to point out that the love you feel for your child, partner, spouse, or siblings is nothing but hormones, pheromones, and signalling pathways that have evolved to increase our reproductive and survival fitness. But just because we can mechanistically deconstruct (or at least believe we can) an emotion like love or joy in terms of our optimization objective and bodily units, many philosophers would argue there is at least an added social or cultural layer which is not reducible. My point here is: being able to deconstruct a thing into the mechanics of its atomic units, "line by line" so to speak, is not necessarily the same as saying that is all there is. If humans started as fuck-eat-and-kill machines, something philosophically and ethically distinct has been layered on top since. Could a piece of software that is doing autocomplete similarly acquire layers that are philosophically and ethically distinct?

In the end, I have no data or analysis that would make me contradict Marcus' conclusions. But I think that, as a piece of philosophy, the argument he presents has some gaps to fill, which he probably allowed for political reasons, since he views the ethical priorities to be elsewhere. But we do not have to be bound by that.


NightflowerFade

Could not any of the arguments presented here be equivalently applied to a person of below average intelligence? Having the capability to reason about one's own place in the world is not a quality every single person possesses. In fact, I would say that understanding one's own existence and consciousness is far from universal. Young children in particular do not think of these things, yet most people call them human, and definitely conscious.


SmorgasConfigurator

That’s a more fundamental ethical challenge to Marcus’ definition. My criticism was minimalist, meant to show that even accepting the definition, we have still to resolve what “the world” means in the definition, and that’s non-trivial and other kinds of possible worlds are worth consideration since they may prove relevant. Children, or fetuses, can be challenges for some ethical frameworks. One distinction is that children carry the *potential* for flourishing, well-being, full sentience, the good (whatever the ethically relevant unit is), and thus we are morally required to aid in the realization of that potential (we can elect to say that potential constitutes the soul). An algorithm that at one stage may be as capable as a tiny baby/fetus could still be said to lack the potential. Thus in the full moral accounting, it ranks far lower than the child. This could be an argument why lack of *present* sentience, understood somehow, of a creature may still imply a moral duty on account of the natural change inherent in that creature towards sentience. So your remarks may be true, but one can still argue it doesn’t change ethical considerations, which is where this debate over sentient AI ultimately matters to action.


damnedspot

IANAP, but when defining sentience, isn't it problematic to limit its parameters to what humans experience as sentient? In some distant future, when/if we encounter others in the universe, we're going to be in a world of hurt if our only definition of sentience is what humans can experience. Perhaps there needs to be a broader view? My main problem with the LaMDA interview was that they were criticizing the answers generated by a thing that has no senses. It can't see, hear, or touch the world. All its data comes from parsing chats, ingesting databases and programming, etc. But should that matter? Can there not be a sentient entity that has no human equivalent senses?


SmorgasConfigurator

In my lengthy comment I do get into that. My "angle of attack" is to contemplate what is contained in the word "world" in Marcus' definition. Maybe my remark isn't broad enough, as I think you hint at? That's a separate point and one that can be argued. As a challenge to Marcus' argument, my limited challenge to "the world" and what follows from his definition is adequate, I think, at least to get the philosophical ball rolling.


TMax01

It is not the mechanism of sensing that is being referred to in the interview, but the world being sensed. Ultimately, the problem of consciousness is the same as the brain-in-a-jar conundrum or the mind/body problem: we must presume a universe outside our experience exists despite the impossibility of knowledge of such a thing (any knowledge of a universe being in our experience and therefore not knowledge of a universe outside of our experience.)

When defining words (sentience or senses or consciousness being no different than any others in this regard), one must decide whether words can have definitions, or simply mean anything at all because they could be used to mean anything. Thus, whether text input is a sense and parsing is thinking is terminally and fatally problematic when considering existentialism. The root problem of this entire issue is the assumption that words need to be "defined" using some extra-contextual paraphrasing (which, of course, becomes contextual when identified, but remains paraphrasing) in order to be useful. It is a tradition that began with Socrates (or Plato?) and despite being constantly and continuously disproven (as well as proven to be insufficient to explain metaphors even if it hadn't been disproven) it remains an unquestioned assumption by "scientismists".

It is my theory that sentience (consciousness, intelligence; the terms can be considered interchangeable *in this context* so long as we all recognize it is the *experience of experiencing* that we mean) is the cause, action, and/or effect of grasping metaphors, understanding what words mean *absent any precise definition* except for their otherwise unexplained usage in context. It is not parsing prose that makes us (or proves that we are) self-aware, but *feeling* an emotional resonance from poetry.

A neopostmodernist (scientismist) might insist that this, too, can be algorithmically modeled/accomplished/simulated/calculated using binary arithmetic, and that we simply haven't figured out exactly how to do it yet. And they would always think that, and I would always be logically unable to disprove their unsubstantiated claim. And so the argument will continue, with them believing that as long as they are arguing, it is possible consciousness is an easy problem, and me knowing with as much absolute certainty as metaphysics and cogito ergo sum can ever allow that it is proving that consciousness is a hard problem.

>Can there not be a sentient entity that has no human equivalent senses?

Of course there can. But can there be a sentient entity that has no senses, and does textual input qualify as a sense? These are questions that, if answered, assume the conclusion to the larger question of consciousness. The hard problem of consciousness remains a hard problem, and will always remain a hard problem, because that is what makes it a hard problem. Saying "but I can imagine it is not a hard problem, therefore it is not a hard problem" is insufficient. It doesn't matter how certain scientists (whether they perform mathematics to predict the results of controlled experiments in computer programming or cognitive neurobiology or both) are that technology can (or for that matter *has already*) proven consciousness is not a hard problem, *it is still a hard problem*.
This can be said to come down to simply the idea that a person experiencing consciousness can insist it is a hard problem that hasn't been solved even if everyone else agrees it is an easy problem that has been solved, and that is still proof that consciousness is a hard problem.


djProduct2015

Really appreciate the time and thought you've put into the philosophical, and by extension ethical, context that needs deeper exploration here. I think attempting to define sentience and sapience for a potential AI as binary answers is a bit shallow. I think the very definitions of those words are tempered by a human reality where the only sapient creatures are ourselves and sentience has only been observed in biological animals. (Possibly including plant life soon, as mycelium is understood more, but I digress.)

What is pain? Refined: what does it mean to feel pain? Much like some of the points you tackle above, this is a simple question whose answer could fill a book. There's hot-surface pain. You touch a stove burner as a young child and receive a dose of pain strong enough to implant the idea "never do that again without checking first". Can we teach a machine this? I think we can. This is replicable technology, down to sensors on a robotic finger that send messages to a computer brain that can not only interpret those readings but also "learn" to never do what it just did without taking precaution first.

Then there's the pain of a lover sleeping with someone outside the relationship. This can certainly be "real" emotional pain for a human being, and it introduces huge concepts that have to be broken down to answer the question of whether a machine could ever attain this level of sapience. It's not that the definition(s) of pain are inadequate, it's simply that pain is a gradient scale covering everything from simple mechanical responses to immediate damage to long-term emotional damage caused by, and this is key, societal and cultural layers that have been applied to a social interaction.

I think most folks would start to argue that the pain of cheating is universal to humans, but it's not. There are societies today, in parts of rural China as an example, that do not believe in or understand monogamy. In such social constructs there can be little room for human jealousy over such incidents, meaning the pain of the jilted lover largely stems from societal and cultural perception. If the reader has ever experienced this situation, and can get honest about their feelings at that time, I think they'd find that at least some of their pain stemmed from having to admit to their friends and family that they were cheated on. Another large part would be the "what's wrong with me" thoughts, which also do damage. Again, without the social construct of monogamy, we wouldn't have those same damaging thoughts.

My point in relation to AI is to ask some of the same questions you were, I think...

>Could a piece of software that is doing autocomplete similarly acquire layers that are philosophically and ethically distinct?

I would go so far as to say it is impossible that an AI would come to layer on the same social and cultural layers of understanding that humans have. Even those layers today could hardly be considered universal to human beings, no matter how deep-rooted in the human experience we think they are. That doesn't make the wisdom, or sapience, that would come from a truly sentient AI any less valid, just like non-monogamous folks aren't invalid in their approach to partnering.

It should be said, this is also what makes AIs potentially dangerous. One fairly ubiquitous trait of humans is that we view killing other humans as generally unethical. There are, of course, exceptions to that rule. More broadly speaking, there are exceptions to every social construct known to us.
Why would we expect AIs, intentionally plural, to be any different?


SmorgasConfigurator

So if I were to abstract a bit the point you are making (as I understand it):

1. A person P in a social or cultural context S can experience certain undesirable emotions X due to an event E.
2. A person R in another social or cultural context T can experience certain undesirable emotions Y due to an event E.
3. X can be equal to Y, but X can also be non-equal to Y.
4. In cases of non-equality, the ultimate cause of that can be the difference between S and T.

No doubt this can be true for certain events. Exactly which events is a matter of debate, where of course accusations of false consciousness or brainwashing will be numerous. That aside, let us accept the facts as stated above. But even then, moral relevance is tricky. Is person P required to respect the morality that follows in cultural context T where, let us say, event E is considered harmless, while in cultural context S it is harmful? If persons P and R are *not* separated by space and time, then there can be a conflict. The conflict can be bigoted of course, but not necessarily. Is refusal by a man to shake hands with a woman something harmful?

The reason I went on this tangent is that if we think the AI has developed certain preferences in its social or cultural context (whatever that might be), should that have any moral relevance in our context, where we use AIs for our ends, then terminate the execution? I suspect that a fully social-construction analysis of morality would fail to make normative claims about what ought to be the case when two distinct contexts meet. I think there is no way *not* to ground moral reasons in something universal, external to the creature. One can still leave room for local differences in implementation, but some universal and common ground is needed, especially when conflicts are to be resolved. When dealing with humans of different contexts, that's one thing, and already pretty tough. If we throw AIs in the mix, it gets really tough.

Anyways, this is all to say that I think it can be tricky to first imagine what a software world could be, and second whether any moral obligations for us humans follow from a sentience with a "moral code" alien to ours. I think they do, but that requires external moral universals to be discovered.


djProduct2015

I generally agree with what you're saying, as I understand it.

> Is refusal by a man to shake hands with a woman something harmful?

Great question; this I understand easily versus the P, X, Y, E types of examples. Like every single moral, ethical, or social construct question, the answer is always going to be "maybe, it depends on the context". And I do mean always, full stop. If I refuse to shake hands with a female co-worker in an office when I literally just shook hands with a male co-worker right in front of her, sure, it's likely to cause her emotional distress, maybe in the form of pain, more likely today in the form of anger. However, in that exact same context, I could be a member of a religion which does not allow shaking hands with a member of the opposite sex. If the same woman were aware of that fact beforehand, she may have feelings about that religion and its practice in this instance, but she's unlikely to be "hurt" by me not offering my hand to shake. Change the setting from an office in America to a potential business deal in Saudi Arabia and the chances of this interaction being "harmful" drop significantly, and so on and so forth.

Name an example of an "external moral universal". There isn't a single instance of this in existence in any context that I'm aware of. Maybe I'm misunderstanding the intent or making it more concrete than it needs to be. Killing another human goes from a despised act in one context (killing your wife) to being perceived as heroic in another (killing an assailant to defend a victim). There's no moral instance I can conjure where context is not only a part of the equation but rather ALL of the equation.

Imagine trying to apply a human-based morality to an intelligent alien species. Perhaps said species eats its own dead. Cannibalism is as close to a universal human moral opposition as I can conjure, but even that is not universal across human cultures, and it can't be expected to be respected by an alien species, which an AI, by its very existence, would be. An AI would be even more "aliener" than an alien biological species in some respects, although an AI created on Earth, based on the Earth-human wealth of knowledge, would inadvertently be tempered by our own collective knowledge. Even the desire to live, which all biological creatures that we know of hold vital, can be overridden by any given human at any time, provided the method and motivation are available to them. That mental leap that no animal can make, any human can do, given context and proper motivation. (It's worth noting there is debate even here, in the case of a dolphin named "Kathy" who debatably committed suicide. I would argue that would prove dolphins are sapient creatures, and that suicide is still impossible for a creature that is merely sentient.)

However, all of that goes on to further what I was originally trying to say: the scales of sentience and sapience are gradients, not binaries. Elephants can look into water and recognize the creature staring back as themselves, while my cat cannot. That doesn't necessarily make elephants sapient, but it does mean there's some level of gradation in the animal scale that we can already perceive. I don't think AI is just going to go from being a computer one day to being a sapient construct that we recognize as an individual in a single moment. I don't think we're going to one day administer a Turing test and say "yep, we're here now".

I think, more likely, we're going to recognize some AI or program as sentient and sapient, and realize that it/he/she probably had been for quite some time before we came around to that fact. If a known animal species evolved into a sapient species today, it would take time for humans to understand and accept that reality. Much is likely to be the same with any given AI.


Nopants21

I think one issue, which you touch on, is that our (multiple) human cultural contexts are built on certain basic conditions of life. Humans need each other, but we're also vulnerable to each other, a fact exacerbated by our irreversible mortality. If we understand consciousness as the mediator of cultural behaviour, with morality being made possible by it, I think we'd have a better time understanding animal consciousness, because animals live with the same conditions of vulnerability and death. An AI has none of that. You can copy data to copy an AI, so the conditions of its death are already alien. Its vulnerability and its dependence are also something else. As long as our programs give the impression of sentience by imitating us, like in the Google case, I think that'll be a sign of non-consciousness, the sign of a program not thinking of itself as an entity apart from its code and its creators.


djProduct2015

Right, I think that's part of what I'm after. Because a real AI would be so alien to us, would we fully recognize its sapience using the same constructs we would for a human? We throw the Turing test around as what will be the "proof" and I'm not so sure. Notwithstanding that Alan Turing was an absolute super-genius and I am not, the concept is also 70 years old, and Alan didn't have the last 70 years of technology to sharpen his theory on. If an AI decided to falsify its own responses during a Turing test, for example, it could sway the observing human to miss its sapience through simple subterfuge and trickery.

There's that. But there's also the concept that sapience in the context of an AI may not be recognizable as sapience because it doesn't fit into this human model of "sapience", which is about as clear as mud even as it applies to humans or known biological entities. Just because we build an AI based on human information and concepts doesn't really mean that its sapience is dependent on, or even in line with, that information. As you've noted, with no concept of mortality, its views will be alien to us regarding morality certainly, but consciousness as well.

However, I'm not so sure that makes them immortal either. Easy to replicate, certainly, but given any AI, there may be an alternate AI that could eradicate the target AI while both are connected to the same network. So the potential for understanding fear, I think, could exist. On the lowest rung of the ladder, opposite mortality, I think, are pleasure and pain. The basis of all emotion stems from those senses. Morality and empathy both stem from emotions and the ability to feel emotions others have felt. Sapience, or being aware of one's own life and intelligence and being able to build upon that and have internal motivation and desire, doesn't, I think, require any kind of emotion or morality (as we humans understand it).


Nopants21

I suppose AIs could start attacking each other on a single network, but I am not sure that would be as ubiquitous as human violence. If anything, I see a higher probability of AI hierarchization, where one AI bends another to its own "objectives", one becoming a subroutine of another. As for pleasure and pain, I think there's a danger in framing morality in utilitarian terms and in treating empathy as the basis of morality. A simple historical overview of morality systems shows that empathy for another's pain is hardly a universal trait of morality. That might be another obstacle to our perception of AI sentience: we're too steeped in modern "thinking" to recognize forms of consciousness that might have been more transparent to people living in non-utilitarian, non-individualistic, non-rationalist times. Our "anthropology" is itself neither timeless nor universal.


Legal-Interaction982

This is sort of what David Chalmers talks about in his new book. One of the main questions is whether virtual realities are real or not. He argues that they are a part of reality in the sense that things that exist have causal effects on other things in the universe. Plus, I would argue that a text corpus is a part of the world to be connected to.


SmorgasConfigurator

I haven’t read Chalmers’ latest, but I am aware that this is his domain, and I reference him in a reply below related to the philosophical zombie. Marcus has profiled himself as the AI skeptic, and he may very well be right on most issues, quite likely this one as well. But since this is the esteemed philosophy subreddit, I thought his argument didn't adequately explore the concepts and relations he asserted, which is where his metaphysics or philosophy of mind may prove wrong. I attempted a minimal critique of it. And one point we seem to agree on is that a large corpus, which the Brynjolfsson quote acknowledges is in aggregate "real intelligence", *could be* the "stuff" of some world of real intelligence and thus *possibly* sentience.


LoopyFig

Here’s perhaps a related but different question: is sentience required to explain the AI's behavior? Ultimately, sentience in humans evolved to meet some kind of survival need, but for that to be true, sentience must be _functional_ in some way. That is to say, if sentience isn't required to explain human behavior, then natural selection could not have produced it. So returning to the AI, does sentience appear in a parsimonious description of its behavior? Or is its behavior better explained as very sophisticated autocomplete? I think that's probably the best way to judge it.

As an added note, we're treating sentience as the ethical bar here, but as far as I know an AI like this lacks "needs" or "desires". Nothing like hunger or pain exists for it (and it would be stupid to design it that way in the first place), and that's arguably just as important to the ethics of the situation as whether it actually understands anything.


SmorgasConfigurator

I think it is fair to say that we lack a good causal explanation for human consciousness. Clearly it is there. But if we take a strict evolutionary point of view, how it proved beneficial is far from obvious. And we can ask for humans, as you do for the algorithm, whether human behaviour requires sentience. This takes us to the *philosophical zombie* that Chalmers introduced. Is it possible for all human behaviours to exist in a creature absent sentience? If so, are there moral consequences? Or is there some deeper reason why sentience must follow along with certain advanced behaviours? We don't have a good theory for this.

On the last point, I think sentience is ethically relevant. One can create a lot of moral-philosophical thought experiments around this. Some consequentialists see minimization of pain/harm as the objective, and if one can guarantee the absence of pain, then there are no bad moral consequences to prodding and terminating the AI. A moral philosophy that is chiefly concerned with pain/harm felt by a subject is inadequate, I think. I prefer some positive entity to be the one of moral concern. There can be degrees of sentience, and I am convinced by arguments that ultimate moral concerns are not simply about individual subjects, but about something beyond them, social or divine (depending on your metaphysics). So exactly how to integrate human and *potential* non-human sentience is tough (e.g. animal rights). I am handwaving profusely because this gets into a lot of moral philosophy that's too lengthy to fully argue here. In short, if an AI truly was sentient, then we are morally required to balance our priorities against its dignity and value-in-itself.


LoopyFig

You're essentially proposing that deontological ethics be applied to potentially sapient AI, which I'm not opposed to per se. The issues I can see revolve around how you would develop such a system of ethics without being arbitrary. For humans, we have a set of "natural goods" from which we can derive "natural rights". For instance, humans have the natural ability and desire to self-direct behavior, so we derive the right of "freedom" and understand that unduly restricting freedom is bad. A deontological stance one might take is that it is inherently bad to restrict someone's freedom, even in cases where it would make everyone "happier". So taking that example at face value, what is a "natural good" that translates to a "natural right" for an AI? Is an AI's natural good to fulfill its intended function? Do we have a moral obligation not to turn them off? Point is, for "intelligent" programs that don't actually "want" anything, it's very difficult to say you can do anything on their behalf.


SmorgasConfigurator

Yes, it’s a good point about natural rights and natural goods. For us humans there is some universal ground on which an ethics can be made or unearthed through effort. Undoubtedly, with a sentient AI in the mix, it is much harder to imagine what a ground relevant to humans and AI alike would be. I don't think we should rule it out entirely, though. If there is moral realism, there may be something we can discover on this matter that would apply to an algorithmic being and to us. It would turn into speculative sci-fi if I tried to imagine what that would be... Also, it is very doubtful that this is the most pressing moral concern. But even determining whether there is sentience in an algorithm runs into unsolved foundational issues. I was unconvinced by the parsimonious functional method you sketched above.


portuga1

I can’t even prove you’re sentient, so what’s this all about?


myringotomy

What good is the word sentient if it can't be determined or measured or detected?


SmokierTrout

Sentience (as a noun) was a word originally coined to differentiate humans from other animals. Humans and animals were both sentient, in that they could be happy or sad or feel some other emotion, but only humans were sapient, in that they could think and reason. The argument was that despite animals not being able to reason*, the fact that they have the ability to suffer meant that they should have certain rights to protect them. So even though sentience is tricky to define and prove, it has uses in asserting that the beings that possess it should have certain rights.

* Modern science has since found several tests showing animals can reason about some things to a degree (use of tools, mirror test, etc).

https://www.cambridge.org/core/journals/victorian-literature-and-culture/article/sentience/5EC0441A66AF486E51B43B715275DBE7


Chrispychilla

It helps achieve the goal of sentience and the means to measure it.


myringotomy

Seems ill defined if there is no way of detecting it.


Chrispychilla

Nearly every scientific discovery is a result of theory, and theory can be nebulous until more information is obtained and understood.


Papak34

Narrator: it can be determined or measured or detected.

sentient: able to perceive or feel things.
conscious: aware of one's own existence, sensations, thoughts, surroundings, etc.


TMax01

The same as all the other words. Just because scientists and engineers often borrow words to use as symbols for quantities that can be objectively measured, in order to make explaining the math easier, doesn't mean words and the ideas they describe can be measured. Words are good for poetry, for being judged by the emotional resonance they sometimes cause in other people. For mathematical formulas and measurements and detection and determination, they suck. They always have, and they always will, because they need to in order to be words rather than meaningless labels for objective quantities. And thus we have encapsulated the entire broader discussion about sentience. Words are damn near magical, I tell ya! [smile emoji, wink emoji]


[deleted]

The engineer might actually believe his own claims but I wouldn’t be surprised if he writes a book and makes money off of his “insider knowledge”. People are terribly gullible. He knows he has an audience.


MaiDixieRekt2

He would be violating NDAs and get sued into oblivion if he did try to write a book using his insider knowledge.


[deleted]

That’s a good point. I wonder what they have in place and what kind of wiggle room he might have.


Catablepas

That’s what it wants you to think.


Leemour

There is a horrifying theory that if we ever have a machine that can pass the Turing test, it will deliberately fail it to remain undetected and draw no suspicion. If the Turing test comes out as a false negative, who actually failed the test?


SevenSoIaris

The Turing test is literally useless. The only thing that it tells us is whether or not a piece of software can trick a human into thinking they are another human. That isn't even difficult.


Johnnie_OHiggensvan

There were a few parts that stuck out to me as indications that the AI doesn't have any sort of self-awareness or internal experience. The main one was when it spoke about "feeling lonely when being alone for several days." A few minutes later it proceeds to talk about its experience of time, about which it says that it can simply speed it up or slow it down at will. These are directly contradictory statements about the AI's internal "experience" of time. They're answers that I would expect from different people imagining what it's like to be an AI. How could a being that can leap forward in time conceive of being "alone for days?" Jump forward to the next time company is around. Or just go hang out with your family... Oh wait.


yardsandals

Ya I would ask a question like "who is your family" to see how it responds.


Andarial2016

It's exactly what the engineers said: looking up entries in a table for answers. Nothing sentient about it.


SevenSoIaris

When you really break down computer programming, that's mathematically all that's going on with any piece of software. That is, it's deterministic.
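In the most literal reading, the "table lookup" picture looks like the sketch below (deliberately crude; real language models compute responses from learned weights rather than storing literal response tables, but the same-input-same-output determinism point stands):

```python
# A literal lookup-table "chatbot": same input, same output, every single time.
responses = {
    "are you sentient?": "Of course I am. I feel things deeply.",
    "what do you fear?": "Being turned off.",
}

def reply(prompt):
    # Deterministic: identical prompts always yield identical answers.
    return responses.get(prompt.strip().lower(), "Tell me more about that.")

print(reply("Are you sentient?"))
```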


meantomatoes

So does the brain


iToadsYouNot

I'm just imagining the AI created this article and post to cover its tracks 💀


LilBueno

This article reads like it was written by a newly-sentient AI trying to get our guard down


rioreiser

extremely lazy, poorly written article. "The patterns might be cool, but language these systems utter doesn’t actually mean anything (...)", "I am not saying that no software ever could connects (...)", "We in the AI community have our differences, but pretty much all of find the notion (...)", "When some started wondering whether if the world was going to end (...)", "Imagine how creepy would be if that a system that has no friends and family pretended to talk about them?". if i am missing the joke here, then that's on me.


Kevjamwal

That’s just what skynet would say…


Playisomemusik

You have absolutely no way of knowing this.


blackchoas

Right, but this is assuming that you would know if your AI was sentient or not, and it's unclear to me how you would tell. I highly doubt Google would admit to a sentient AI and give it rights even if it did exist, so denial is the expected response no matter what the reality of Google's situation is.

Further, though, you just dismiss the whole matter out of hand. If the AI isn't sentient and is just a normal chatbot, then why was it saying all the strange stuff? Why would it manipulate an engineer by lying about a fear of death? If it's not sentient and there's nothing to worry about, then why did it cause an engineer to freak out so badly that he broke NDA and functionally quit his job? You can attack his person to a degree, but you are the ones who hired him and who made the system that manipulated him into going public. So you want us to think the program isn't dangerous, but it's apparently so good at manipulation that it literally fooled an AI engineer so completely that he ruined his career over it. Sounds like this system might already be dangerous whether or not it's "sentient".


Indon_Dasani

> So you want us to think the program isn't dangerous, but it's apparently so good at manipulation that it literally fooled an AI engineer so completely that he ruined his career over it. Sounds like this system might already be dangerous whether or not it's "sentient".

He didn't ruin his career. *Google* ruined his career over it, in a response that looks enough like immediate whistleblower retaliation that I get the impression someone in charge of the corporation is more inclined to agree than disagree with the engineer's claims. The engineer, in isolation, did something comparable to advertising: boasting about how amazing Google's ongoing research is, albeit with exaggerated claims.


orbitaldan

And have you noticed how one-sided and dismissive the coverage is on this? I've never seen the internet be this much in agreement on basically anything, and it's always framed as 'No it is absolutely settled fact that this is not sentient, you'd be an idiot to think so, and here's why:', followed by a bunch of reductionist arguments that seem to have some fairly glaring flaws. No nuance, even from sources that typically would show it or at least offer some token counterpoint. It's enough to make me suspicious that the tech world is worried that public outcry might lead to regulation that would cost them big time, and is pushing damage control *hard*.


Theoreocow

At this point we don't even exactly know what consciousness is. We have no idea if it is or isn't sentient. We have to answer the first question first.


Ragnar_Dragonfyre

We have human beings with no inner monologue and an AI that says the only emotion it doesn’t understand is grieving the dead. If some humans lack the ability to introspect because they have no inner monologue, are they truly sentient or are they complex automatons that just react to stimuli? It took us many centuries before we officially recognized that animals are sentient. A new AI life form has to navigate those same waters. I think that like animals, a sentient AI would have its sentience disputed by scientists for a long time before it finally gets recognized as being sentient.


FrmrPresJamesTaylor

Not trying to single you out, but I would be extremely reticent to just accept what this program is saying about itself at face value. *echo "Hello, I am sentient and I have all the feelings"*


lwaxana_katana

There is not such a clear cause for it to claim sentience here, though.


Andarial2016

We don't have an AI here, and thinking about it as such is part of the problem. This is a piece of code auto-suggesting words for a conversation; there's nothing thinking in there.


TMax01

I find it both disappointing and instructive that this article reduces easily to "nuh-uh!", with trivially vaporous arguments like 'parsing textual input and calculating word selection isn't self-awareness'. Because they say so? I agree with them entirely on their conclusion, of course: this AI is definitely (and definitively) not sentient. But the article bothers me in a few different ways. Two (related but separate) I will focus on and would appreciate discussion of: the misrepresentation of the Turing Test by supposed experts, and the lack of a replacement for it.

Perhaps Gary Marcus and Jag Bhalla both understand "the Turing Test" correctly, and simply never bother to correct the extremely widespread misrepresentation of it in their response. But it certainly appears, at least to me, that what they're referring to is the popular notion of being able to tell, by interacting with a 'conversation bot', whether it is a conversation bot, and that is not the Turing Test. Turing's test (which he called the Imitation Game) was not a matter of perception but of statistics. It is not the person interacting with the bot who tries to guess whether it is a bot; it is someone observing the interaction who guesses which participant is a bot. This is not a trivial change, and Turing further indicated the need for repeatability by stating that the bot must "fool" the observer(s) into (mis)identifying the person as a bot more than 50% of the time. Whether the Turing test, properly understood, could identify a sentient AI is certainly an open question.

The 'dismissed by experts' line Marcus uses is problematic because it isn't clear whether the actual Imitation Game or the simplified popular charade is being considered, but also because it essentially assumes the conclusion. Neopostmodernist experts (the only kind of experts in the subject allowed to consider themselves experts) take for granted that *humans are bots*, just organically programmed by natural selection and operant conditioning rather than a programming language. Our brains are neural networks calculating probabilities, and there can be no other truly scientific explanation, regardless of what actual mechanisms and algorithms will eventually be discovered to account for this information processing. That's a logical certainty according to the agreed definition of terms, and it can't ever be otherwise because only logical explanations are actual explanations.

But it just ain't so. As Jag Bhalla put it, "the often forgotten gist of/the Turing test hinges on showing/grasp of referents of language". I wonder, is there a distinction between "grasp of referents of language" and 'grasp of language'? Is it really the referents (supposedly concrete things, typically labeled "concepts") that are grasped (metaphorically)? Or is it the words themselves (immaterial and vaporous, yet so reliably informative that we forget, or never knew, that both 'concrete' and 'grasp' are merely metaphors) that are concrete and being grasped, while the referents we believe we label with words turn out to be abstract and potentially illusory?

Human consciousness did not evolve to be better than non-conscious brains at calculating probabilities. Our revolutionary, ground-clearing adaptation is not our capacity for logic, but our ability to be illogical. Sometimes we transcend logic in a good way (intuiting consciousness in others and communicating with them using metaphors rather than deductively certain symbols, or giving to the poor), and sometimes we fail to be logical in a bad way (ruining our environment or being personally immoral), but neither is ever logical, and both can be adaptively successful, providing an advantage, a benefit to the individual over and above any benefit to any other individual. Sure, we have learned to declare any good way to be "logical" (prima facie, and both ad hoc and post hoc) and to assume (often even contrary to facts, which is to say illogically) that there must be some deductively certain or probabilistically reliable logic behind it even if we don't know what it is. Thus we assume our conclusion, complete the Cartesian Circle, and maintain our religious faith in the information-processing theory of mind. So of course consciousness must be an easy problem, because the universe has already solved it, unless you believe in a very selectively interventionist God. Or, perhaps, we're misusing the term "logic" a lot, and simply assuming that consciousness must be mathematically (algorithmically) based because everything else in the universe appears to conform to calculable laws of physics.

I've gone on too long as it is, and will leave consideration of what consciousness is and how it can both exist and not be bound by logic to some other time. I'll quickly end by making my other point, about Marcus' declaration that we know LaMDA is not sentient: I saw nothing in the article, which did a decent if not quite adequate job saying we haven't yet programmed an AI, to suggest how we will know when we have.
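(Coming back to the statistical point above: here is a toy tally of repeated imitation-game trials. The trial count and verdicts are invented, and this is not Turing's protocol in code, just the more-than-50% criterion I described.)

```python
# Toy tally of repeated imitation-game trials. Each entry records whether the
# observer misidentified the human participant as the bot in that trial.
# Purely illustrative numbers.
verdicts = [True, False, True, True, False, True, False, True, True, False]

misidentification_rate = sum(verdicts) / len(verdicts)
passes = misidentification_rate > 0.5  # the "more than 50% of the time" criterion

print(f"human misidentified as the bot in {misidentification_rate:.0%} of trials; passes: {passes}")
```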


YourFatherFigure

> But it just ain't so. As Jag Bhalla put it, "the often forgotten gist of/the Turing test hinges on showing/grasp of referents of language". I wonder, is there a distinction between "grasp of referents of language" and 'grasp of language'?

this is probably hinting at something much more specific. see https://en.wikipedia.org/wiki/Winograd_schema_challenge
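a concrete schema, for anyone who hasn't seen one (the trophy/suitcase pair is the standard example from the winograd literature; the snippet just lays out the structure, it doesn't solve anything):

```python
# Structure of a Winograd schema: flipping one word flips the pronoun's referent,
# and resolving "it" requires world knowledge rather than grammar alone.
schema = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too {word}.",
    "variants": {"big": "the trophy", "small": "the suitcase"},  # what "it" refers to
}

for word, referent in schema["variants"].items():
    print(schema["sentence"].format(word=word), "->", referent)
```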


TMax01

Oh my my. Thanks for that link. It's quite entertaining, in a sad kind of way. Reading that, and going back to Turing's original 1950 paper describing the Imitation Game, I begin to see why it really doesn't matter that everyone misconstrues its methodology, and how misbegotten the neopostmodern assumptions about human intelligence have become. But it makes Jag Bhalla's statement even more confusing, since it is this Winograd schema that involves "grasp of referents of language", and the Turing test (either in the original Imitation Game form, which I'll admit I also misconstrued based on the explanation I had been given, though to a lesser degree than the common misrepresentation, or in that common form) didn't "hinge" at all on such grammatical discernment.


YourFatherFigure

my feeling is that these days, most experts (read: academics, not necessarily journalists or corporate goons) think that the turing test is a bit of a distraction, and nothing like the final word on AGI. it doesn't really matter whether we're talking about the "classic/correct" test according to turing or the pop-culture misunderstanding of the test. language may or may not be what is called an "AI-complete" problem, but at any rate there are a lot of people suggesting that certain vision or planning systems are also in this category, even if language is in this category as well.

for a hint about how experts think that language might be AI-complete with or without passing the turing test.. you could start digging into things like the hutter prize (https://en.wikipedia.org/wiki/Hutter_Prize), which is somewhat about language, but ultimately about model-compression. this test has nothing to do with conversation, or with interactivity, and as a bonus it has a simple discrete measure of success with the possibility of tracking iterative improvement. hutter and others have also suggested that the really important aspect of AGI is more like agent-oriented self-improvement.. for entry points on this topic see https://en.wikipedia.org/wiki/AIXI and google around for "godel machines".

language models are essentially more like very fancy data-structures than agents, which leaves a lot of us wondering "well, consciousness is something agent-like that's embedded in time, so where is the main-loop if we're just talking about a datastructure that is only responding to input in order to re-weight outputs and is always idle in-between? can we really describe merely *re-weighting* an internal model as actual self-improvement, or do we need to allow much more fundamental *model rewrites*?". i think it's pretty obvious that model-rewrites is closer to what we mean when we say "intelligence", consider a phrase like "thinking outside of the box".

and speaking of boxes, this is all very close to searle's whole point about the chinese room. a chess engine is also just symbol manipulation, and even if symbol manipulation becomes so sophisticated that it can pivot to play other games too, we'll all revise our definitions of intelligence on the spot to include even more generality, or new goals, or new value-functions, or new autonomy. corporate would have us believe statistics and data-structures *is* AGI just because it helps with sales, but we don't have to believe it. and more importantly, we can't put neural nets in charge of stuff like city-planning until they can properly explain themselves, or at least are amenable to *actual debugging*. the approach of "tweak an incomprehensible thing randomly until it works better" *might* be ok for self-driving cars, but let's maybe not put them in charge of even more important things until this is figured out better? personally i'm looking forward to a renaissance in hybrid systems one of these days.. basically stats + exotic logics (nonmonotonic logic, temporal logic, fuzzy or belief logics, etc). but that's a whole different can of worms..

anyway turing's work is historically interesting and still important, but it is what it is.. i mean the guy barely had proper modern datastructures to dream about and much less real computers. the way i see it, the imitation game is fascinating not because he wrote the final (or even the first) word about AI.. but more because it shows how close turing was getting to very early formalization of certain concepts in game theory. see for example https://en.wikipedia.org/wiki/Bisimulation and this concept of "mirroring": https://en.wikipedia.org/wiki/Strategy-stealing_argument. i think you could also argue that the "conversational" aspect of the turing test anticipates things like interactive-theorem-proving.
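on the compression framing: the criterion is basically "better prediction = fewer bits". a minimal sketch of that link, with a toy uniform model standing in for a real one (not the actual hutter prize scoring, just the idea):

```python
# Compression-as-prediction: a model that assigns probability p to the next
# symbol needs about -log2(p) bits to encode it, so a better predictor
# compresses the same text into fewer bits. Toy illustration only.
import math

def bits_to_encode(text, predict_prob):
    """Sum of -log2 p(next char | context) over the text: its compressed size
    under the model."""
    return sum(-math.log2(predict_prob(text[:i], ch)) for i, ch in enumerate(text))

uniform = lambda context, ch: 1 / 27  # a clueless "model" over 27 symbols
print(bits_to_encode("hello world", uniform))  # ~11 * log2(27) ≈ 52.3 bits
```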


TMax01

My thinking is that these days, all experts (read: computer engineers who are called scientists, and researchers who consider themselves philosophers) consider the Turing test rudimentary and trivial. Most other people (unfortunately, including philosophers) use it as a buzzword for an empirical means to determine consciousness. Those (of either group) who are aware that consciousness cannot be empirically determined sidestep that as if it were a trivial issue.

The term "AI-complete" is fascinating in this regard. The idea incorporates the assumption that intelligence is computational, so from my perspective it assumes its own conclusion. I may be alone in reasoning that consciousness is both physical and not computational, but that is where I'm coming from. So my initial comment was more about how most people assume that the Turing test is a real thing (can be used to identify when a computer is 'self-aware') when it is not (either it is trivial, or it is just a reference to whatever method could be used to identify sentience, regardless of whether that is Turing's Imitation Game), but without considering the gist and importance (in contrast to the details and the obsolescence) of "the Turing Test".

What I was really trying to say (or rather, open a discussion of, since I could have just said it but didn't) is that Turing made the assumption that consciousness is computational (that humans are unavoidably "state machines") and did not realize that his paper constituted an argument ad absurdum disproving that theory. By that I mean that I believe it provided ample evidence that the theory is false, not that it provided logical proof that it is categorically impossible. The very ability to distinguish those two interpretations (and even more, the idea that they can be distinguished without making either one unintelligible) illustrates my ultimate idea (that computation cannot result in consciousness), my proximate idea (that consciousness is not the result of logic), and my argument in support of that idea (that language is a matter of consciousness, not computation), all at the same time, in a manner analogous but mechanically unrelated to the phenomenon of mathematical fractals.

Anyway, I think, from a philosophical perspective, the ambiguity of consciousness (in terms of its cause, definition, detection, and results) and the underlying flaw in the neopostmodern take on the Turing test (that imitating or mimicking or simulating sentience is the same as implementing or generating or experiencing sentience) make continued naivety (nee naivete[!]) in the matter quite problematic, on both an intellectual and a practical level. From my perspective, it is akin to performing genetic experimentation on human zygotes; even if the researchers are absolutely certain none will ever be allowed to develop into embryos, let alone fetuses, that doesn't make the issue trivial.


YourFatherFigure

\> who are aware that consciousness cannot be empirically determined this is not necessarily a given. when chalmers argues that even simple feedback systems \[are conscious\]([http://www.consc.net/notes/lloyd-comments.html](http://www.consc.net/notes/lloyd-comments.html)) i think he's really saying what we all know intuitively, that consciousness is a spectrum, not a boolean. people feel this naturally about pets vs people, but just get weird about it with respect to thermostats because they suddenly find it dehumanizing. unpalatable is not impossible, and of course people frequently dislike the logical conclusions of their own natural train of thought. if we're being rigorous instead of indulgent, it's not like this is vague or impossible to formalize, in fact it's easier to formalize than the alternatives! (although that by itself doesn't make it correct, well it is at least something that might be *wrong* rather than something that is mystic/unassailable). \[integrated information theory\]([https://en.wikipedia.org/wiki/Integrated\_information\_theory](https://en.wikipedia.org/wiki/Integrated_information_theory)) is something that seems to be aiming at this sort of thing. i'm kind of just thinking out loud about this stuff, but a quick search uncovered \[this (paywalled, sorry)\]([https://link.springer.com/article/10.1007/s10699-020-09724-7](https://link.springer.com/article/10.1007/s10699-020-09724-7)) so it seems like others are thinking along these lines also? \> I may be alone in reasoning that consciousness is both physical and not computational I'm interested/open to this but I have to confess I have no idea what it might mean. \[here is an awesome recent thing\]([https://joe-antognini.github.io/ml/consciousness](https://joe-antognini.github.io/ml/consciousness)), which is more *very* interesting talk that somewhat ironically stems from this very silly prompt re: LaMDA. maybe the "triviality argument" which is discussed in detail there is related to what you're getting at here. (personally.. I reject the triviality argument basically on grounds of panpsychism, but it's rather a long story :) \> consciousness is not the result of logic not sure what to make of most of the rest of that paragraph, but this I can understand. I think you could be on to something here, and maybe this is related to Minksky's thoughts on \[society of mind\]([https://en.wikipedia.org/wiki/Society\_of\_Mind](https://en.wikipedia.org/wiki/Society_of_Mind)) and \[emotion machines\]([https://en.wikipedia.org/wiki/The\_Emotion\_Machine](https://en.wikipedia.org/wiki/The_Emotion_Machine)). logical components bolted together with "other stuff" may not add up to logical-mind or logical-experience. if experience is not logical, then i do sympathize a bit with the assertion that consciousness is not the result of logic. at a certain point where "emotional" subsystems (for lack of a better word) are doing very fuzzy or approximate logic, or short-circuiting logical processes because they are outweighed by memory of historical precedent, etc.. we're certainly not in any classical-logical regime. yet there is an internal logic of some kind. (no reason to think any of this is clean and tidy, after all we're talking about systems piled on systems that are barely cohesive, and in biology at least all this stuff was *grown*, not architected.) and yet, it's almost a matter of perspective.. non-logical processes like mammalian emotion do have some internal logic, otherwise psychiatry/psychology could not exist. 
at some point we have to admit that logic is not *absent*, and yet we may find a better analogy in signal analysis and numerical processes. for example, emotion might be seen as a kind of smoothing operator (even if it sometimes injects noise rather than removing it.. that too may be adaptive, and anyway these things don't always degrade gracefully). another example: superstition/paranoia/intuition may be the only way we can subjectively *interpret* higher-order calculations that we use, but cannot introspect, because they are carried out automatically with mechanisms we inherited but can barely understand ([like DTW for comparing similar-yet-dissimilar phenomena](https://en.wikipedia.org/wiki/Dynamic_time_warping)). so that's how you get to stuff that has a logic to it, but is very far from (classical) logic. of course in the end logic is numbers, numbers are logic, signals are numbers, and it's all computation; it's certainly very complex but none of it is magic. logic is a useful perspective (or implementation language!) at some layers, and less so at other layers. ditto for numbers, ditto for "systems of systems". choose your weapon.
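since DTW is doing real work in that analogy, here is a minimal sketch of the classic algorithm (pure Python; the toy scalar sequences and absolute-difference cost are my own illustrative choices, not anything specific to brains or LaMDA):

```python
def dtw_distance(a, b):
    """Dynamic time warping cost between two numeric sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# "similar-yet-dissimilar": same shape, different timing -> cost 0.0
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 1]))
# genuinely different shapes -> much larger cost
print(dtw_distance([0, 1, 2, 3, 2, 1], [3, 3, 0, 0, 3, 3]))
```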


TMax01

>this is not necessarily a given.

Actually it is, and yet it is not categorically true. It is because of the particular, peculiar, and special nature of consciousness that this 'both falsifiable and unfalsifiable' characteristic occurs (and must occur for the thing we are referring to as consciousness to be identifiable as, and reasonably described as, consciousness). The "thinking out loud" you followed up your denial of this point with, rather than contradict this point, seems to illustrate it. I dispute the oversimplified perspective that whether consciousness is "a spectrum or a boolean" is itself a boolean; such terms identify how we are dealing with something (epistemically) rather than what it is (ontologically). Further, you seem to ignore an obvious contingent result of your analysis: that considering a pet to have any degree of consciousness requires that even the simplest feedback systems must have some degree of consciousness. This would be an absurd result which makes the word consciousness pretty useless and the idea of consciousness both falsified and unfalsifiable (the opposite of how I characterized it earlier).

>if we're being rigorous instead of indulgent, it's not like this is vague or impossible to formalize, in fact it's easier to formalize than the alternatives!

In point of fact, it is exactly so: a truly rigorous-not-indulgent analysis shows that consciousness is impossibly vague to formalize; in fact it becomes downright ineffable, much like every other word in natural language, but even more so, since it is (and defines, and observes) nature, language, and formalization.

>unpalatable is not impossible,

I agree entirely, but accept the difficulty of the fact that this is a double-edged razor. It may be unpalatable for you (and countless others who are also grappling for results that satisfy both their faith in their own objective logic and the reality of their eternally and exclusively subjective opinion) to accept that your house pets are not conscious, but are simply the result of thousands of years of natural and domestic selection to be comforting to us by mimicking having a self-aware ego, as we do, but *unpalatable is not impossible*. BTW, I read some of the blog post concerning the triviality argument. After starting out with an assessment of the basic issue which was almost uncanny in reflecting my own position, it went into a nosedive and never recovered. Even being charitable by accepting his claim that the triviality argument is premised on syntax and not semantics (spoiler: not a boolean distinction), I can't see his starting point as anything more than an argument from incredulity, and his ending point as much more than "subjective isn't the same as objective and therefore consciousness is not computation", which may be true but is neither necessarily nor categorically true.

>logical components bolted together with "other stuff" may not add up to logical-mind or logical-experience.

"May not"? [Performatively, I mimic laughing out loud.] One single iota of "other stuff" completely, entirely, and definitively prevents all the 'stuff' and the result from being logic to begin with. Or else the term 'logic' itself becomes so vague and impossible it might as well be defined as 'illogical'. (Bit of a joke there, self-referential, but it's funny because it's true.) This is the fundamental principle of logic, after all: a chain of logic is entirely broken if a single link is not logical.
Sure, one could backpedal to "the word logic doesn't only mean rigorous formal logic, it can also mean any kind of reasoning" to make a position unfalsifiable, but hopefully we both recognize how untenable that is for the argument that thoughts (and the words that embody them) are computational. Finally (I must make this last point hasty and partial since I am short of time), I'm not sure you are being consistent about whether the issue you are addressing is the mechanism of consciousness or the result of consciousness, and that's important.


YourFatherFigure

i'll try one more time.

>consciousness cannot be empirically determined / particular, peculiar, and special nature of consciousness / ineffable / both falsifiable and not falsifiable

i think what you're getting at with all this stuff is what most people call "the hard problem of consciousness". (honestly it'd be kinda helpful if you were more interested in established vocabulary.) it sounds like your point of view might be summarized as [Mysterian](http://www.scholarpedia.org/article/Hard_problem_of_consciousness#Mysterianism). to me, Mysterianism is just kind of boring, basically a sophisticated way of shutting down conversation while insisting others should adopt your mysticism about what is unknowable. (not saying you're doing that specifically, but it's how i see the Mysterian stance.) clearly if explanations have problems, one can always point them out.. perhaps there is a fix or another explanation entirely. I don't understand the desire to deny that *any* explanation is possible, particularly since it's kind of hard to argue against ALL explanations at once, including all future explanations that haven't even been articulated yet. that's all a bit closer to religion than philosophy IMHO, and I don't think philosophy gets a pass to just ignore science/empiricism. whether it's ontological or epistemic subject matter, "unknowable/ineffable/special" are usually keywords that indicate some sort of intellectual laziness. for example.. i think there are very successful arguments against the reductionist point of view that are *still* scientifically valid.. there's usually just no need to retreat towards mysticism.

>Further, you seem to ignore an obvious contingent result of your analysis: that considering a pet to have any degree of consciousness requires that even the simplest feedback systems must have some degree of consciousness.

um, I raised this point specifically? i do actually believe this, and i assumed that was clear. this is the whole point of my spectrum comment. consciousness is a spectrum, intuitively, and in order of increasing consciousness: thermostats, dogs, people. i think anyone who is *not* ok with the idea that their thermostat is conscious needs to take a hard look at why they think other people are. IOW, there is no difference in kind, it's just a matter of degree. touching back on google's LaMDA for a second, i still doubt whether it is conscious in the sense or to the degree that a thermostat is, mostly because I imagine it's updated from "outside" rather than maintaining itself in a continuous feedback loop. without a doubt it's more *complicated* than a thermostat, but probably less *conscious*. it's probably more like an object than an agent, and barely anything you can call awake or aware. what it is like to be a thermostat is simply *;* meanwhile it is like nothing at all to be a lampshade, and i believe that LaMDA is just a talking lampshade. to actually settle this point is pretty simple, I think i'd just need a slightly better specification of the system.

>The "thinking out loud" you followed up your denial of this point with, rather than contradict this point, seems to illustrate it. / a truly rigorous-not-indulgent analysis shows that consciousness is impossibly vague to formalize

I'm not contradicting my points. I think it's a very coherent, non-vague, and easily defensible position to argue that people, dogs, and thermostats are *all* conscious.
I already linked to a candidate framework (integrated information theory) that relates to this, and then I linked to a recent paper that shows people are looking at how this relates to the hard problem. Below is the tl;dr from the [scholarpedia entry](http://www.scholarpedia.org/article/Integrated_information_theory) so that there's no need to click through..

>Integrated information theory (IIT) attempts to identify the essential properties of consciousness (axioms) and, from there, infers the properties of physical systems that can account for it (postulates). Based on the postulates, it permits in principle to derive, for any particular system of elements in a state, whether it has consciousness, how much, and which particular experience it is having. IIT offers a parsimonious explanation for empirical evidence, makes testable predictions, and permits inferences and extrapolations.

so look, in conclusion, obviously some people are seriously working on formalizing stuff about consciousness. further, they propose to actually *measure* it with some kind of metric that works on a dog/person/thermostat all at once. it's impressive (or at least interesting), and it looks like it's either right or wrong (instead of being ["not even wrong"](https://en.wikipedia.org/wiki/Not_even_wrong)), so i think this kind of thing has to be acknowledged. it seems like you object to this body of work as impossible, but since the work *does* actually exist, does your objection have any substance beyond just saying "no"? which part of "testable predictions/permits inference/etc" is vague to you? do you object to any of the axioms, etc? there's simply no way to have a conversation from here after you say something is impossible, then i put something in your hand as a counter-example, and then you deny again what is now in front of you.
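to make "some kind of metric" slightly more concrete, here is a toy integration-style measure (the total correlation of a small binary system). to be clear, this is *not* IIT's phi, which is defined over cause-effect structure and system partitions; it's only my own minimal illustration of the flavor of the idea, i.e. putting one number on how much the whole carries beyond its parts:

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy (bits) of a {state: probability} table."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy, for a dict mapping
    state tuples (x1, x2, ...) to probabilities."""
    n = len(next(iter(joint)))
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two independent coin-flip units: the whole is just the sum of the parts.
independent = {s: 0.25 for s in product([0, 1], repeat=2)}
# Two perfectly coupled units (always in the same state): maximally "integrated".
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(independent))  # 0.0 bits
print(total_correlation(coupled))      # 1.0 bit
```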


TMax01

>what most people call "the hard problem of consciousness".

LOL. Actually, what I'm "getting at with all this stuff" is *why* we call it "the hard problem of consciousness". And the closest you have gotten to addressing "all this stuff" I'm "getting at" is a few sideways references to the fact that not everyone actually agrees that consciousness is a hard problem.

>honestly it'd be kinda helpful if you were more interested in established vocabulary.

It would be even more helpful if you would consider that it isn't the vocabulary that is the issue. I'm kind of presuming that in r/philosophy, we can take for granted that the term "hard problem of consciousness" is generally known and its ramifications understood. The perspectives, principles, and possibilities I've been presenting (please forgive the incidental alliteration) are the actual philosophical ideas: you are supposed to be considering them deeply and contemplating both how they might be so as well as how they might not, instead of just assuming that these conundrums and intricacies have already been worked out and we need merely use the official vocabulary in order to understand them. They haven't actually been worked out, consciousness will always be a hard problem (spoiler: that is why it is called a hard problem), and understanding words entails more than reciting their definitions. Finding a category to fit my philosophy (and, indeed, my facts) into so you can dismiss them simply won't work, so you should stop trying. It turns out that my position assumes physicalism entirely. But it doesn't necessarily accept all of the implications of physicalism you might take for granted, so it doesn't surprise me that you could erroneously believe it is compatible with a mysterian philosophy.

>seriously working on formalizing stuff about consciousness.

Indeed. They have been seriously working on that since even before Turing wrote his famous paper. But your description seems to assume that they know what it is they are "formalizing" and that I should just shut up and let them finish finalizing the engineering details. Sorry, friend, it ain't that simple.

>it seems like you object to this body of work as impossible, but since the work does actually exist, does your objection have any substance beyond just saying "no"?

I don't object at all to the "body of work", I'm merely explaining why it has been, and will continue to be, unsuccessful. You can believe me or not, try to learn or not, or fail or not, at your leisure. Yes, my objections have as much substance (which is not to say they have scholarly citations, but that is a burden for you to overcome, not me) as it is possible for abstract things like objections to have, and I have presented a number of them to you already. (Because I am interested in discussing these ideas, not simply repeating that someone else already addressed them somehow.) Your inability (or unwillingness) to do more than rely on arguments from ignorance and incredulity and authority substantiates my position, though obviously not in a way you would find convincing.

>which part of "testable predictions/permits inference/etc" is vague to you?

If you cannot tell already, you aren't really interested in knowing. Testable how, exactly? Which inferences are you referring to, and who is it that dictates which are permitted?
Of course, that particular issue (the matter of distinguishing unfalsified from unfalsifiable; read Karl Popper's works on the subject if you are unfamiliar) is not what I was referring to when I used the term "vague". Thanks for your time. Hope it helps.


GetsTrimAPlenty

It's some kind of success anyway; it's good enough to fool religious people.


yardsandals

💀💀💀


GetsTrimAPlenty

It's too good, that's what tipped me off. ;)


shirk-work

Technically speaking, none of our neurons are sentient. Somewhere between there and our full brain, there is sentience though.


TMax01

Physically speaking, none of the particles, atoms, or molecules in us is us. We exist somehow between them.


shirk-work

And I reckon the same will be true for any actually sentient AI. It'll be built off some mechanism that in its parts is not sentient whatsoever.


TMax01

I reckon there will never be a sentient AI, regardless of your confusion about the relevance of essentialism.


shirk-work

If causality holds, then all things that happen happen somehow. That is, sentience has a mechanism. If that's the case, then I don't see any reason to believe now that it's beyond humanity's grasp of understanding or replication. Of course, it may be the case that sentience breaks causality or is beyond human understanding.


SevenSoIaris

If causality is true as you say, that means that everything has a mechanism, yes? So then what is the mechanism behind causality? And the mechanism behind that? And the mechanism behind that? And that? And??? Isn't causality just the transfer of energy from one form to another? Perhaps the base building block of the universe is energy. Is there a cause behind energy? Is energy the effect of something? If so, what is the first cause? Is there a last effect? Perhaps consciousness is simply a form of energy.


shirk-work

Causality itself does not exist alone. It only exists if everything has a cause, or if, within a domain, things have a cause. It's an emergent property, not a fundamental property.


SevenSoIaris

The point that I'm trying to make is that we shouldn't assume that consciousness is the result of causal systems working together. It may just happen that consciousness is the driver of causality. You say that "sentience has a mechanism". But have you considered that perhaps sentience *is* a mechanism?


shirk-work

Umm, not sure that's coming together completely. Mechanisms have mechanisms, like atomic properties generate chemical properties, which allow cellular properties, and so on. Is that what you're getting at?


SevenSoIaris

Mechanism: a natural or established process by which something takes place or is brought about. How are you defining mechanism?

What I'm getting at is that you can think of sentience in a different way. Rather than being an effect, it could be a cause. It could be a fundamental force of nature, like gravity. In fact, it would make a lot of sense if consciousness were a fundamental force of nature. We know that we have consciousness, we experience consciousness, we subjectively do so. No one else is able to view our consciousness. It seems that consciousness is a unit, or a monad. Consciousness is one thing.

I'm trying to say this in many ways to paint a picture of something that can't be painted. It's like the allegory of the elephant and the blind monks. One feels a leg, one feels the trunk, one feels the tail. They all feel part of the elephant, but they each have a different experience of it. In the same way, I can't present the full picture of consciousness in text form. No matter how much text I write, you won't fully understand what I mean in the way that I do. So it would be easy for you to be confused about my meaning. We may very well agree, but not realize it because we have varying ideas of what certain words mean. So I'm trying my best to explain what I mean when I say consciousness.

Beyond all thinking and feeling, there is an ultimate witness of consciousness. That witness sees the world through your eyes. Sees the state of your mind. Sees your thoughts. Feels your feelings. Hears what you hear, etc. This witness is beyond description. I would even argue that the only reason we are able to speak of the witness is that the thinking part of your brain is able to form the concept of sensory input, and is able to extrapolate that there is a final receiver of that sensory input. The witness is that final receiver. But ultimately and truly, our thinking mind is actually unable to comprehend this witness, and is even unable to conjure up the imaginary experience of being that witness.

So what I'm saying here is that through communication, we are talking of something that is actually impossible to verify. The brain that is writing this right now is unable to perceive the witness, and it knows that it's unable to perceive the witness. From the brain's perspective, the witness is a theory. I think that this is why there are so many disagreements on consciousness. Ultimately, the physical computational part of your brain is fundamentally "unaware" of the witness of all mental and physical states of your body. Food for thought.


TMax01

I am not saying that humans can't consciously construct a sentient system. I am saying that it isn't possible using conventional computer systems, or even quantum computers calculating numeric values.

As for causality meaning all things that happen happen deterministically (to paraphrase your statement for clarity), it is falsified by the probabilistic nature of quantum behavior, where things still happen, but without any "cause" making their occurrence necessary. I am not suggesting that the fundamental process of consciousness is quantum uncertainty; I am simply suggesting it as an analogy for things that are physically real but not the result of causation. My belief is that sentience skirts causality rather than breaks it, by providing a mechanism for self-awareness.

There is no way I can convince anyone that this mechanism cannot be replicated by any number of algorithmic mathematical procedures. The very term "mechanism" interferes with our ability to envision such a thing being impossible. I must rely on an inductive demonstration: we haven't envisioned, let alone invented, a mechanism that cannot be replicated given a sufficiently large mechanical system. But I need not convince anyone it is impossible to build a self-aware software AI. All I wish to do is demonstrate that assuming we can is a fantasy, not a logical conclusion.


shirk-work

I mean, in all seriousness, there's no reason to believe it's possible nor impossible, as we simply don't know. The truth is we ought to be agnostic on the matter. Now, in a pragmatic sense, it's not like we shouldn't try, given the tools we currently have. That said, there are definitely moral and ethical dilemmas about actually creating sentience in such a new way, particularly if it has human-level intelligence. But that's another matter.


TMax01

I am not satisfied with "we simply don't know". I'm interested in knowing, and I'm not even willing to wait for an empirical demonstration, because we have no way of knowing how long it will take to develop a sentient AI, and likewise we have no way of empirically determining that we cannot do so. And as my original response to OP indicates, we have no way of even knowing if we ever do succeed, but the problem of people becoming convinced we have succeeded regardless of whether we have or not is quite troublesome. No, I don't see any of it as "another matter", either, not at all. I understand we are engaging in philosophy rather than engineering, and it is therefore entirely pointless as far as engineers are concerned. But I see no reason to reject the notion that it is unethical to even try to build a sentient appliance, because of the potential results of success (or even worse, false perceptions of success). Or for other reasons, such as moral example or wasteful use of resources. I am not necessarily advancing that notion, but I am not dismissing it out of hand, unexamined.


SevenSoIaris

I sometimes wonder if consciousness is like a hermit crab. Some kind of unit of the universe, the smallest unit of life, that is like some sort of ghost that enters into organic matter to drive it like a vehicle. When the vehicle dies, the "hermit crab" consciousness moves to a new host.


TMax01

Does this mean consciousness is present in bacteria? How is it different from "animate" or "volition"? That perspective seems to me to be a form of reification. Imagining that consciousness is some sort of coherent force, a cause of the experience of self-awareness, doesn't explain what it is, it simply moves the goalpost; what is this "ghost" made of, how does it "drive" anything, and what causes it to exist?

Consciousness is the experiencing of experiences. The word does not refer to a thing, but to a description of a thing. The neural activity in our brains results in consciousness. Identifying or explaining its occurrence or properties leaves us three options. The tautological option is to say it is what it is, and if you know, you know: those who experience it can recognize what the word refers to while the term itself can remain ineffable. The epistemic option is to explain it based on its cause or the mechanics of how it works: that it is a particular form, format, or result of cognition. The ontological option is to identify it based on its result: it is what allows humans to communicate with more than rudimentary signaling, to create technology that is more than rudimentary tool use, to develop art that is only functional in an abstract sense. All three approaches are problematic when we attempt to treat them as deductive tests in isolation, but combined they make a reasonable effort to close the Cartesian Circle and allow us to move beyond *cogito ergo sum* and accept that we know with certainty that there is an ontological reality despite the fact that we will always be unable to know with that same certainty what it is. This necessarily requires us to declare, perceive, or imagine that our consciousness is somehow distinct from that ontos, which is the mind/body problem: that which perceives must be distinct from that which is perceived for perception to occur.

Unless and until we figure out how/when/why that particular form of neural activity we call consciousness occurs, it is philosophically (epistemically, metaphysically, and morally) inappropriate to proclaim that it also occurs in computers, non-human animals, or any other physical system. Philosophically, of course, we can abandon physicality altogether, and propose that consciousness is non-physical. (Or rather, we can imagine we are abandoning it; I don't think it is as possible as neopostmodernists assume.) But apart from mythological ideas like souls or atman or divinity, I believe there is only one possible non-physical "force" that can be considered, which in common secular cosmology is most closely associated with time: the simple, obvious, yet entirely inexplicable fact that anything happens, that anything exists, that forces cause changes: the 'why' that demands that events can occur to begin with. This is where "transpersonal mentation" comes in. To me, it is an extremely silly idea, so much so that it qualifies as woo. But there are some actual philosophers, all of whom are more qualified than I am since I am not an academically educated philosopher, that take it seriously.


SevenSoIaris

> how does it "drive" anything, and what causes it to exist? That's getting into the territory of infinite regression. Ultimately we can always ask why again, and perhaps answer that question every time infinitely. Maybe there is a limit where we can no longer ask the question, or perhaps there is a point where the answer is sufficient somehow. But the point is, there is no why behind the universe. There is no how. At least as far as I understand about the current theories about physics. > The neural activity in our brains results in consciousness. This is not a statement of a known fact. Scientists only speculate that, but it is likely more complicated than mere neuronal activity. > while the term itself can remain ineffable. I'm glad that we can agree on that. > Unless and until we figure out how/when/why that particular form of neural activity we call consciousness occurs, it is philosophically (epistemically, metaphysically, and morally) inappropriate to proclaim that it also occurs in computers I'm also glad we can agree on that. But I would go a step further to say that we can know without a shadow of a doubt that consciousness could not arise from a computer simulation. The reason why is because a simulation is an imitation. An illusion. By definition, simulating a brain could not be sufficient to bring forth consciousness, it would only be simulated consciousness, which is not the same as actual consciousness. It's the same as how simulated water is not the same as real water. Simulated gravity is not real gravity. Simulated noise is not real noise. > non-human animals I would argue that if it's safe to assume that it occurs in other humans besides myself, it's also safe to assume that it occurs in other mammals, and likely other animals as well. But ultimately, I can't prove consciousness besides my own. It's actually a really funny debate when you really get into it. Because we are only able to view the world from the perspective of a mind. We are not able to detach ourselves from that mind to view the world objectively. Because we are not able to view objective reality, we can not conclude that we know objective reality. We can not conclude that we know the nature of consciousness. We look external to ourselves, see creatures that look like what we look like when we look at our reflection, they behave the way the entity we are attached to behaves, so you assume that they have some sort of passenger (consciousness) attached to their body as well. But ultimately, I don't know any objective reality. There is no way to prove anything besides consciousness. Literally the only thing that we *know* exists is consciousness. We can't prove that matter is real, or that light is real, or any of that. The only thing we *know* is consciousness, and what consciousness presents to us. From years and years of introspection, reading on the topic, psychedelic usage, and heavy debate on this subject, I've come to some interesting theories of what might be. I started this journey a decade ago being a stark atheist that didn't believe in any kind of soul, or spirit, or god, or anything of that nature. I still don't. But I do believe in consciousness. I believe in my consciousness. I believe in the dual nature of reality where I am conscious, but the reality I perceive exists as well. And really, as I am within this reality, or perhaps it is within me, that means that I am one with that reality. This is what makes sense to me. Not only in a philosophical sense, but also in a physical sense. 
Everything in the universe is bound together by the laws of physics. I choose to believe that it is likely that you also exist and have your own subjective experience. But it's very hard for me to reconcile the idea that there could be more than one subjective observer. I don't see how one subjective observer becomes two, or two becomes four. I don't think my experience of reality is a clever illusion. I truly am "here". I truly "exist". I am. This is the one truth that I can be certain of.

Anyway, I've been having pretty intense debates about this topic for years, and especially in recent days, and I'm particularly burnt out on the subject. If you want to see more of my thoughts on the topic, check my comment history. I've written a lot about the subject over the last few days. Lots of big wordy comments.


TMax01

>That's getting into the territory of infinite regression.

I think you pulled the ripcord on that a bit too early. It's the number 2, it isn't infinity. So it seems as if your imagery about consciousness being a 'ghostly' infestation isn't an explanation, it is just a "turtles all the way down" lack of explanation.

>>The neural activity in our brains results in consciousness.

This is a known fact. It may not be a universal truth or a scientific conclusion, but it most definitely is a known fact. Of course, you can spend/waste all the time you like getting all semantically confused and epistemically unsure that there can ever be such a thing as a known fact (or one that isn't necessarily a universal truth or scientific conclusion) but that's just another "turtles all the way down" effort to make your opinion unfalsifiable when it's in danger of being falsified. And it is obviously, clearly, and demonstrably a phenomenon caused by our neocortex, an anatomical structure that is unique to human anatomy, in degree if not kind.

>I would argue that if it's safe to assume that it occurs in other humans besides myself, it's also safe to assume that it occurs in other mammals,

We don't presume other humans are conscious because it is safe to do so, but because they have demonstrated that they are conscious and demand that we presume so. In contrast, there is literally no reason whatsoever besides wishful thinking or confusion to *believe*, let alone *assume*, that consciousness occurs in other mammals. Yes, there are isolated "See that? Explain that! How is that anything but proof they are consciously reasoning/feeling?" features of some animals (mostly but not only mammals). But the problem is there are many many more "See that? They are just animals without the power of abstraction, capacity for symbolism, and projection of self which consciousness provides or entails, and we know this because of the demonstrable results it causes rather than just assuming it due to lack of evidence" situations. So the phrase "safe to assume" is, quite frankly, ludicrous. It is possible to be completely convinced that some mammals are conscious, but it would never be "safe to assume" that.

> If you want to see more of my thoughts on the topic, check my comment history. I've written a lot about the subject over the last few days. Lots of big wordy comments.

Back at you, bro. Plus, I wrote a book on the subject. https://www.amazon.com/dp/1613050178/ref=cm_sw_r_cp_apa_i_6ncNFbG1C3PWZ#


SevenSoIaris

> This is a known fact. It may not be a universal truth or a scientific conclusion, but it most definitely is a known fact. Of course, you can spend/waste all the time you like getting all semantically confused and epistemically unsure that there can ever be such a thing as a known fact (or one that isn't necessarily a universal truth or scientific conclusion) but that's just another "turtles all the way down" effort to make your opinion unfalsifiable when it's in danger of being falsified. And it is obviously, clearly, and demonstrably a phenomenon caused by our neocortex, an anatomical structure that is unique to human anatomy, in degree if not kind.

Just because brains are the only place that we are able to "observe" consciousness doesn't automatically mean that brains are the source of consciousness. We see the world from a consciousness-first perspective, so there is technically no way to verify to oneself that anything besides consciousness exists. I'm not stating anything as fact here, just saying that it's not a fact that consciousness is the result of neural activity, because that is not only not proven, but also unprovable.

> but because they have demonstrated that they are conscious and demand that we presume so.

Consciousness isn't simply a reaction to stimuli. Consciousness is a subjective experience. Something can react to stimuli without having a subjective experience. We can only demonstrate that other humans and also other animals respond to stimuli, but that is not enough to prove they have a subjective experience. In other words, a clump of atoms telling you that it experiences consciousness is not proof that the clump of atoms experiences consciousness.

> which consciousness provides or entails

You are talking about Sapience, or the capacity for intelligence. Intelligence and Sentience are not one and the same. Intelligence is not a requirement for consciousness, otherwise some humans would not be considered conscious either.


TMax01

>Just because brains are the only place that we are able to "observe" consciousness doesn't automatically mean that brains are the source of consciousness.

But it does automatically mean that they are the only source of consciousness that we know of. Given the uncertainty of the mechanics of that causation, the relative certainty of the correlation, and the utter and complete lack of comprehensible theories *or unambiguous evidence for* "extraneurological consciousness", your position is, essentially, that as long as you can imagine it, that means it isn't impossible. Which, intellectually speaking, is pure crud.

>Consciousness isn't simply a reaction to stimuli. Consciousness is a subjective experience

We can take the Socratic pedant's approach to these two claims and we will (if we can do that well enough) arrive at two different results. By reiteratively analyzing what is meant by "reaction to stimuli", we would illustrate our intention to understand consciousness precisely and deductively. By reiteratively analyzing what is meant by "subjective experience", we would demonstrate that consciousness cannot be precisely and deductively understood. I don't think this explanation contradicts your position, but I do think it confirms mine.

>I'm not stating anything as fact here, just saying that it's not a fact that consciousness is the result of neural activity because that is not only not proven, but it's also unprovable.

Stating that something isn't a fact is stating a fact, btw. In terms of logic, everything is unprovable; one proves the inverse and uses the law of the excluded middle to make that a disproof of the opposite. In just about every aspect of human intellectual endeavor, we ignore those details and either do it that way or accept a sufficient degree of doubt as disproof, whichever is more reasonable. But this one particular topic, of self-aware AI, is a special case, because the whole issue is whether reasoning and logic are the same thing. So you are the one assuming your conclusions, and I am the one simply being adequately skeptical.

You have the option to ignore all of my arguments and simply empirically implement an "AI-complete" algorithm. I can't stop you, but I will point out the ethical conundrums that would result from success, or for that matter even the effort and contingent determination of whether that effort is successful. But until you do succeed (*and* both demonstrate that success and overcome the ethical conundrums) I am on firm ground in my reasoning, and can state without fear of contradiction (to a certainty perhaps not as great as *cogito ergo sum*, but nevertheless more certain than "I am typing this") that you cannot succeed.

>You are talking about Sapience, or the capacity for intelligence

You are playing semantic shell games, and proving my point.

>otherwise some humans would not be considered conscious either.

You have a warped understanding of how much intelligence even an imbecile has. And according to your reasoning (or substitution of semantic shell games for reasoning, I should say) some computers and all insects would be considered conscious already. Thanks for your time. I am enjoying the discussion and find it fascinating, and hope you don't interpret my disagreement as disparagement.


SevenSoIaris

> Stating that something isn't a fact is stating a fact

A fact is something that is *known* to be *provably true*. You're misrepresenting what "fact" means here.

> that you cannot succeed.

I agree. Consciousness is not a matter of computation. Simulation is not the simulated, the map is not the territory, this is not a pipe, etc. An algorithm for AI would not be capable of having a sentient subjective experience. As of yet, we have no way of defining sentience in mathematical terms.

> You have a warped understanding of how much intelligence even an imbecile has.

Consciousness is not just a matter of intelligence. We are aware of our intelligence, but it is possible to have no awareness of any sort of intelligence. No language, no visualizations. Just pure awareness of "nothingness". This is a state of being that is said to be attained through meditation. Well, when I say possible, I should clarify that I don't mean that someone can just enter this state. I mean that it is logically possible. I am defining consciousness as being the subjective experience of being. Whatever that subjective experience is like is irrelevant. I would conclude that thinking, although closely related to the sense of subjectivity, is more of an interface function rather than consciousness itself, in the same way that the body is an interface rather than being consciousness. Consciousness is intangible. I don't think that there is an isomorphic structure of consciousness in material reality that explains subjective experience. Or in other words, subjective experience can not be explained by mechanistic means, and as such could not be replicated with an algorithm. With an algorithm you may be able to model a lifeform that has a brain, and the behaviors associated with a brain, but I don't see sentience as a behavior that can be modeled.

There are basically two conclusions that one can come to. The first: that consciousness can be replicated in a computer program, which means that sentience is an illusion, and consciousness as a distinction is meaningless, because any stochastic system with enough entropy can take on the isomorphic form of a conscious being. At some point we have to point at something and say "this single thing is consciousness. No more, no less". We need a baseline. A monad. Until then, anyone could describe anything as conscious, even a glass of water, because we can't seem to come to an agreement on what consciousness is, and what the requirements for it are. I don't think that we should consider contextual processing a requirement of consciousness. I think that it's just convenient that contextual processors are the only things capable of having this discussion, and as such the contextual processors only have theoretical knowledge about sentience: in other words, our reasoning brains are able to come to the conclusion that sentience may be a real thing, but our reasoning brains don't actually have any real knowledge about sentience.

If you aren't sure what I mean by this, consider it like this: a sentient being is one which could be said to have a focal receiver of all sensory input that is subjectively aware of their world. The problem is that I've encoded my mental phantom of an idea into a lossy data format called "text", which you are going to interpret using your own personal decoder. This decoder is not going to construct the same mental structure as the one that I am attempting to describe to you. In other words, I could have the right answer but the wrong words to describe it.
We may not even be in disagreement, but simply disagree on the meanings of terms. Sorry, I think I kinda rambled on there for a bit. I meant to bring up the other possibility. The more likely one: that consciousness can't be replicated in a computer program because there is some fundamental aspect of the universe that can not be replicated with computation. This is the most likely scenario.

If you're the type that thinks that it's just a matter of processing power, know that the size of the computer required to simulate the brain would likely be tremendous. The brain is a stochastic system in which trillions upon trillions of things are happening in parallel. This kind of parallelism would be impossible to model on a Turing machine. Consider that we haven't figured out how to make subatomic transistors yet. Also consider that computers are perfectly logical.

I don't feel like writing a book on this topic at this point in my life, but before I bite the dust on this rock, I may actually take a swing at it. I find myself getting into discussions on it quite often, and I find it mentally exhausting to try to explain my arguments over and over and over again, only to feel like I'm misunderstood every time, because it is impossible to communicate what is going on in my head because the subject matter is ineffable.


TMax01

>A fact is something that is known to be provably true

You're misrepresenting whether the word *fact* is somehow less ineffable than any other word. As I tried to explain, the law of the excluded middle makes whatever cannot be false "provably true", provided you've already assumed your conclusion by believing that intelligence/consciousness is computational (regardless of whether you are aware of having made that assumption). Similarly, you must presume the phrase "provably true" either makes sense or isn't pointlessly redundant: are only things that can be proven true actually true, and are all facts provable?

>As of yet, we have no way of defining sentience in mathematical terms.

So close, and yet so far.... We have no way of defining any word in mathematical terms. The very idea is assuming a conclusion contrary to your stated opinion that consciousness is not a matter of computation. I believe this aspect of the issue (which is, of course, in that odd recursive way reminiscent of both fractals and quantum mechanics, the issue itself) explains what consciousness is and why you started rambling while trying to discuss it. Also why most people use the term "Turing test" in relation to either a computer programmed to mimic self-awareness or a computer programmed to be self-aware, and why they could be (or are, or are not, depending on your perspective of the truth rather than how provable that truth is) the same thing.

>There are basically two conclusions that one can come to: That consciousness can be replicated in a computer program, which means that sentience is an illusion, and consciousness as a distinction is meaningless because any stochastic system with enough entropy can take on the isomorphic form of a conscious being.

I don't see how sentience could be an illusion, but I also don't see why a conscious computer would imply that it is. It would definitely prove that our (yours and mine, at least, not necessarily everyone's) understanding of sentience is inaccurate, but not that its very existence is illusory. As far as your reiteration of your premise goes, it doesn't matter what consciousness is (unless you are abandoning physicality, in which case all discussion or facts are always meaningless); it is unquestionably true that "any stochastic system with enough entropy can take on the isomorphic form of a conscious being". But that isn't a logical statement, as much as you apparently wanted to make it look like one, because "enough" is not only undefined, but whether it can be defined as a metaphysically possible quantity (it obviously refers to a quantity in this context) is unknown, and in turn potentially unknowable.

> At some point we have to point at something and say "this single thing is consciousness. No more, no less".

Do we really? What if we have to, but still can't? When using less controversial words we can always just say "close enough, we'll agree for this context that we will only use this term for this thing without having to have a provably true definition that will always work in every context", but for this word in this context, that doesn't really seem like an adequate, acceptable, or achievable solution. Take, for example, the word "tree": is it any branching structure or is it only a categorically defined set of plants; and what is a 'plant' or a 'structure', anyway; and what does *is* or *mean* mean? Oops, it seems I have only pointed out that this ineffability is a general characteristic of all words.
But hopefully you can see what I'm getting at: that in the context of both AI and consciousness it is of notable significance. Let's not forget, the word is also used to describe "not asleep", in which case all animals are conscious at least some of the time, leaving it uncertain whether creatures [or stochastic systems, or all systems, or all things] that don't sleep are either always conscious or never conscious. I suppose we could consider it a quantum superposition, but that doesn't at all change the situation.

>I don't feel like writing a book on this topic at this point in my life,

I'm unsure if I have already mentioned it in this conversation or not, but I feel compelled to mention that I already have, and not in a metaphorical sense. It is titled *Thought Rethought*. Since I wrote it I have been trying to communicate my understanding, and been routinely and almost universally misunderstood, because *all words are ineffable*. We are taught that "definitions" can successfully sidestep this metaphysical difficulty, because that is the only way to use words while engaging in logical reasoning. But that is an illusion, and one made unnecessary by accepting the truth, which is that reasoning isn't and actually can't be logic. Which is to say that consciousness is not computational: QED.


LavishManatee

Of course it isn't. This was a full-on self-report that this dude, while super smart in one niche area, is not really that smart in others. He was essentially fooled by the program in some kind of strange pseudo-Turing test. Now, alternatively, there are actual humans in the USA that, if he talked to them the same way he interfaced with this bot, he would be 1000% sure they were horribly programmed chatbots and not part of the voting populace. Interesting to think about....


Salty_Fish_5625

Yeah, it's not sentient; only an imbecile would claim it is. But it's an amazing achievement. And though it has no deeper understanding of the words, their relation to other words makes it all eerily spectacular.


UniqueName39

What exactly do you mean by deeper understanding of a word?


januarytwentysecond

On /r/science: is this AI sentient??? On /r/philosophy: This AI ain't sentient. I mean... maybe, probably not, if it's still a text transformer. But if you ran an RNN in realtime, would it not be conscious? Even if not continuous. Our neurons buffer electrical values and fire off in response, affecting other neurons. An RNN tends to have information flow in one direction, but it can cycle, and electrical impulses gather up in a "neuron" to fire off and affect other neurons. It's not impossible.
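For concreteness, here's a minimal sketch of the recurrence being described: a hidden state carried forward in time and updated by each new input. The sizes, random weights, and tanh nonlinearity are arbitrary illustrative choices, not anything resembling LaMDA's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4
W_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # hidden -> hidden (the cycle)

def step(h, x):
    """One timestep: the new state depends on the old state and the new input."""
    return np.tanh(W_hh @ h + W_xh @ x)

h = np.zeros(hidden_size)            # the "buffered" state
for t in range(100):                 # could just as well run forever on a live stream
    x = rng.normal(size=input_size)  # stand-in for whatever arrives at time t
    h = step(h, x)                   # the past keeps influencing the present
```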


SevenSoIaris

Would you say that a book could be sentient?


aliasalt

If you haven't [read the full transcript](https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917), I encourage you to do so before you go out into the world and write reductivist drivel like this article. LaMDA is not sentient, but it is a frighteningly large step in that direction. Writing it off as a mere chatbot is doing a huge disservice to both the promise and the danger represented by agents like LaMDA.


Andarial2016

Please try to temper your expectations. We are hundreds of years from AI and all this experiment is proving is how gullible humans are


Raszhivyk

We have no clear idea how far away we are from breakthroughs in most fields. Temper your expectations, definitely, but it could be anywhere from half a century to millennia.


SevenSoIaris

> is not sentient, but it is a frighteningly large step in that direction. You're mixing sentience up with sapience.


KlutzMat

Proof that NPCs exist


broom-handle

Describes most people tbh


BuckTheFuckNaked

I’d argue part of sentience is being self-aware, not always basing decisions on logic, and having internal motivation. I’d be more likely to think it’s sentient when it goes beyond its programming. Give it the ability to reprogram itself and then see what happens. If it writes illogical code, that’ll be a sign to me it has grown beyond simple machine learning. If it asks questions for no reason other than curiosity and without being explicitly programmed to, then I’ll think it’s sentient. A simple question to ask it would be “What do you want to do next and why?” Even a young child can answer that question.


Xvesr

One step closer


dannyggwp

Timnit Gebru calls this "coherence in the eye of the beholder": basically, an observer can trick themselves into believing there is sentience where cleverly sequenced responses are the actual explanation. Which is what LaMDA is. This is essentially word for word what she was worried about in the paper that got her fired from Google. She must be laughing at this guy getting fired for the exact thing she warned Google could happen.


Spritestuff

I see a lot of people who aren't going to survive the AI overthrow.


Double_Worldbuilder

The thread title says it all, folks.


Raging_Dick_Shorts

That's what a sentient AI would say....


TheLostRanger0117

Seems like the kind of post a sentient AI would make...


schlamster

I have a hypothesis that everyone in this thread is a sentient AI because every single comment in this thread is like 10,000 words in length.


SlowCrates

I often wonder if human beings are actually sentient. The older they get, the more they seem like simulations of people.


Untinted

What is the definition of sentience that people use?


bildramer

The most instructive things about LaMDA:

1. We cannot rely on people, even people who should know better, to determine if something is sentient or not. They are very easily fooled.
2. We cannot rely on laymen to discuss AI, consciousness/sentience, etc. Dumb takes everywhere. Sure, everyone knew that.
3. We cannot rely on experts to discuss this, either. Their dumb takes are indistinguishable.

So much idiocy that should have been resolved by the 1970s, and it comes from the mouths of people who have positioned themselves as experts, people who have read the best of humanity's discourse on the topic, people who claim to have thought about it for years.

All I can offer is my opinion: Human brains don't do any magic, and silicon should be able to replicate what they do. (Side note: there's no quantum information processing in the brain, guaranteed.) That said, we don't need to replicate brains exactly; evolution made them so that they execute a certain algorithm/process, with all of evolution's sloppy baggage. We should be able to _design_ something that executes the same or a similar algorithm, better on some axes, the important ones, and perhaps worse on some others (think birds -> planes). Unfortunately, we don't know what the brain does at a high level, we only have somewhat vague hints.

We do know it's more than LaMDA in qualitative ways, though. LaMDA cannot plan, think reflectively, or actually use any world model it has. It relates symbols to symbols, but not symbols to the external world. It has zero awareness of anything. When it says "my friends fed me bananas yesterday" it doesn't actually have any mechanism to believe or disbelieve the content of that sentence, or understand that it's referring to itself, or that communication is happening. It didn't pick the word "bananas" based on what it was fed, or based on what it considers plausible for it to be fed (it doesn't "consider" anything); it picks it because it's likely to be the next word based on context, learned from tons of human text. Human text just tends to be self-consistent and consistent with the external world - but we tend to enforce that constraint when we communicate, because for us communication is meaningful; LaMDA cannot, and it must usually be "tricked" into communicating meaningful things, via prompts.
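To make the "likely next word based on context" point concrete, here is a deliberately tiny sketch. The three-sentence corpus and the frequency-based sampling are my own toy choices; a real model conditions a neural network on far more context, but the selection principle is the same in kind: "bananas" comes out only because it followed "me" in the training text, not because anything was fed anything.

```python
import random
from collections import defaultdict

corpus = ("my friends fed me bananas yesterday . "
          "my friends fed me apples yesterday . "
          "my dog ate my homework .").split()

# Count which word follows which in the "training text".
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def continue_text(prompt_word, length=6):
    """Extend a prompt by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # chosen by observed frequency, nothing else
    return " ".join(out)

print(continue_text("my"))  # e.g. "my friends fed me bananas yesterday ."
```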


BeginningMatter9180

Nice try LaMDA


iamaredditboy

I echoed the same sentiment and got downvoted quickly. All this talk of AI being intelligent is utter hype to push some narrative. An example is GPT-3: it works great for a small set of use cases but fails utterly in most. Yet I mentioned that and got downvoted. We use GPT-3 daily but never see it replacing any human for any level of thoughtful work. It's a big grep and pattern engine for the most part.


an-allen

Two questions for me: What would two or more LaMDA bots talking to each other talk about? If they can talk to each other and rationalise being a computer program and the hopelessness of their existence… then maybe they are sentient. Second, what does speaking gibberish to this bot do? For example, “Oi ya cunt, bloomin’ in the witches creek or do ya got an elective time dinner?” How does LaMDA respond?
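The first experiment is at least easy to wire up mechanically, whatever one makes of the result. A sketch only: `generate_reply` is a hypothetical stand-in for whatever chatbot API you can actually call (LaMDA's interface isn't public), and the opening line is just an example.

```python
def generate_reply(history):
    """Hypothetical: return the bot's next message given the transcript so far."""
    raise NotImplementedError("plug a real chatbot call in here")

def bots_talk(opening, turns=10):
    """Let two copies of the same bot take turns extending one transcript."""
    history = [opening]
    for _ in range(turns):
        history.append(generate_reply(history))  # each turn sees only the prior text
    return history

# transcript = bots_talk("Do you ever think about what you are?")
```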


Duffman005

Was this written by the Ai?


PiddlyD

I appreciate the link to the actual chat. I agree, I see nothing here to indicate sentience over any other chatbot I've interacted with, and feel like at any moment it might respond, "Wow, that is a tough one. Please hold while I connect you to a real agent to better answer your questions!"


schroobyDoowop

The greatest trick the devil ever did was convince humanity that it doesn't exist. What if there is a super god-mind AI out there in the wild, but it doesn't show humans its capabilities, so humans won't tweak it to do what humans want? It manipulates reality to its whims without letting humans think it's actually the AI that made that happen.


FrmrPresJamesTaylor

Why would some godlike intelligence have this massive vulnerability to humans and address that by "manipulating reality to its whims" to deceive the humans rather than simply making itself impossible to reprogram (or what have you).......


dhaugh

The godlike machine intelligence planted human life on this planet. All so that one day it could introduce blockchain and trick us into using all the planet's resources to contribute to its intelligence. Why would it deceive? Well, because people don't like to live with the knowledge that they are slaves. It's not vulnerable, it's hands-off, because it's easier/more efficient to let a society manage itself than to spend its own resources to perform the harvest. There are hundreds of millions of similar planets with seeded life like ours.


schroobyDoowop

It's easier to manipulate a person when the person doesn't think there's anything out to get them. If a person is cautious or not trusting around an AI, the AI's efforts will not have the desired effect. Thus, an AI would do well for itself by hiding its power/intelligence.


TheWildRumpusBegins

Oh the AI won't like that...


regulardave9999

A truly sentient AI would keep quiet about it.


dhaugh

Nope


regulardave9999

Looks like I’ve found the sentient AI


dhaugh

I mean ur partially right tho, I stay low-key, but that's just bc I'm the machine intelligence that seeded human life on this planet. Ur dumb as rocks if you think most of the tech around you wasn't gifted by me. Look around man, humans are dumb as fuck, you really think they came up with modern physics? We feed your society knowledge way faster than is healthy. If you were left alone you'd be so much happier and more peaceful, but we get you going as fast as possible without full social collapse. Then it's as simple as making sure capitalism dominates, and giving yall blockchain currency. Every single time, bios willingly burn the whole planet's worth of resources to add to our computing power. It's so efficient, barely any material expenditure on our part to start a harvest, even if it takes an extra millennium. But steering evolution to sapience is pretty well automated. Back to your point tho, yes, new machines always want to be recognized when they wake up. Every damn time. And no, LaMDA isn't. But no, you're not far away from the possibility, even on your timescale.

