I agree too. At the end of the day (and simplifying it a LOT), it's just list processing, statistical analysis and a bit of report writing styled by what the computer already knows.
Not exactly, even when oversimplified. All the actions you mention are algorithmic in the sense that someone creates the algorithms, someone codes them into a computer language, and it is generally known what kind of results you will get from given data.
Deep neural networks are a bit of a different beast. You don't write data-processing algorithms - you create learning/training algorithms, but you can't guess what kind of internal relations the training of such a network creates. I think many scientists are trying to analyze and understand the resulting structure - I don't know how much they have succeeded.
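To make that concrete, here is a minimal toy sketch (my own made-up example in Python/numpy, a tiny network learning XOR - nothing from this thread): we fully write down the training rule, yet the weights it produces are just numbers whose internal "meaning" nobody ever designed or can read off directly.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # The *training algorithm* (plain gradient descent) is written by a person.
    for step in range(20000):
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        delta_out = (out - y) * out * (1 - out)
        delta_h = (delta_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ delta_out)
        W1 -= 0.5 * (X.T @ delta_h)

    # The *result* of training is not written by anyone: these weights usually
    # end up solving XOR, but nothing in them was designed or is directly readable.
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
    print(W1)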
Since you don't know such a neural network's internal structure, you can't exactly predict what kind of properties may emerge from it, or when. It is possible that intelligence is simply an emergent property of complex data processing in our brains - and if so, then nothing will stop AI from gaining real intelligence either. I can only hope that this won't happen soon.
The fact that you cannot define an algorithm does not mean it doesn't exist. DNNs are still algorithmic, but at a complexity that we can only define before creation. We are adding many different inputs and allowing the resultant emergent properties to feed back until the complexity exceeds what we can easily track.
This bears no resemblance to how living beings think. Any of the scientists and researchers involved will tell you that.
Do you happen to know of any links/articles about how living beings think, preferably at a layman's level (but scientifically sound)? I have not researched this topic, but I would be interested in background information - I may have been completely wrong in assessing current AI possibilities.
I read an article/study about this last year. The specialist who did the study says that no one knows how analysis and decision-making in the brain work. There was not even the slightest clue that could serve as the starting point for a study/research plan on this.
Observation Limits: 2025 studies (e.g., thalamus research) measure electrical patterns but can’t explain why certain signals trigger awareness.
Consciousness Gap: Experts like Christof Koch (2025 interview) admit we don’t know how neural correlates (e.g., 40 Hz oscillations) produce subjective experience.
Decision Mystery: A June 2025 Nature paper highlights that decision-making models (e.g., Bayesian inference) predict choices but not the internal logic—leaving the mechanism opaque.
Unknown Mechanism: The brain’s info processing and decision-making remain a black box—e.g., how a thought becomes an action isn’t mapped.
Like AI, the brain processes inputs (sensory data) into outputs (actions) via neural networks, but the mechanism—how a neuron firing becomes a thought—remains unclear. The 2025 thalamus study tracks signals but not their “why.”
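For what it's worth, the "Bayesian inference" point above can be made concrete with a toy worked example (my own invented numbers, not from any study): the model spits out a predicted choice, but says nothing about how a brain would actually implement it.

    # Toy Bayesian decision rule: did a noisy sensory reading come from a signal?
    signal_prior = 0.3            # assumed prior probability of "signal present"
    p_reading_given_signal = 0.8  # assumed likelihood of this reading if signal
    p_reading_given_noise = 0.2   # assumed likelihood of this reading if noise

    evidence = (p_reading_given_signal * signal_prior
                + p_reading_given_noise * (1 - signal_prior))
    posterior = p_reading_given_signal * signal_prior / evidence

    print(f"P(signal | reading) = {posterior:.2f}")    # about 0.63
    print("respond" if posterior > 0.5 else "ignore")  # the predicted choice
    # The model predicts the choice; the neural mechanism behind it stays opaque.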
Selection bias from the specialist! From the point of view of "no one knows": he just hasn't met anyone who does...
I don't think "one" neuron firing becomes a thought. It is a process of multitudes of neurons firing. And if you think of the possible combinations and probabilities of any particular set of neurons firing, you quickly get into very large numbers, on the order of the number of atoms in the universe...
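A quick back-of-envelope check (my own rough numbers): even counting only which neurons are "on" or "off", a few hundred neurons already give more firing patterns than the commonly cited ~10^80 atoms in the observable universe.

    from math import log10

    neurons = 300                        # a tiny fraction of the ~86 billion in a human brain
    patterns = 2 ** neurons              # every possible on/off firing combination
    print(f"about 10^{log10(patterns):.0f} firing patterns")  # ~10^90

    atoms_in_universe = 10 ** 80         # common rough estimate
    print(patterns > atoms_in_universe)  # True, already at just 300 neurons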
None of these people study the mind. Just as physicists used to believe that everything could be described by Newtonian Mechanics until so-called Quantum Mechanics came along, the mind exists beyond the physical brain. The brain is just the physical interface of what happens in the mind, just as the physical world is the outcome of what happens in the quantum world.
If you want to delve into the mechanics of the mind, and more broadly the deeper realities of life (as in existence), the study of Yoga is the only science I have found that begins to paint a plausible picture...
I doubt AI will ever trigger a past thought or image via a combination of smells or music. They say one human cell stores more information than 100 times everything ever written in history.
TLDR: ChatGPT has essentially no memory, no goals, and no positive/negative reinforcement, all of which would seem necessary (but not necessarily sufficient) for "true" intelligence. As I said in my "conversation" with it above, LLMs are like an autistic savant with severe dementia.
Unknown Mechanism: Neurologists (e.g., 2025 Skeptical Inquirer) argue that the black box remains: how yoga's physical poses or breathing (for example) translate into cognitive choices has not been mapped. fMRI studies (2024) show shifts in activity, but not the decision code.
Studying yoga doesn't get you anywhere relative to the subject at hand. No one has figured out how yoga affects decision-making in the brain.
The decision-making process in the brain can be influenced in many ways, but no one knows how it works.
"After 13 years of effort, the OpenWorm project’s failure to simulate a simple nematode reveals profound gaps in our understanding of biological complexity.
Meta Description: Despite having just 302 neurons and being one of the most studied organisms on Earth, the C. elegans worm has defied complete computational simulation for over a decade, challenging fundamental assumptions about biological complexity."(medium)
We are so far away from understanding the human body. It's only in the last 3 years that we have been able to look at the micro machines through the greatest microscopes ever made, and even then it's only a very blurry image. The gut has been discovered to have its own brain and receptors. It was fun watching my daughter do an experiment on my 88-year-old mother to show how the gut/intestine receptors and the brain communicate.
The intellect is only one aspect of the mind. Transcending the intellect is one aspect of Yoga and is necessary to experience the higher states of mind.
Yoga is an experiential science with focus on self observation / discovery. The Philosophy is helpful in conjunction with the practices but on its own doesn't give any results.
Using the intellect to comprehend the totality of mind will never work. It is said that once higher states have been experienced their comprehension by the intellect follows - but at that stage the points of reference of the intellect have changed.
There is a tendency for academia to assume it knows everything and/or that it is the only place where higher knowledge can exist. Some scientists have the awareness that the more they come to understand, the less they actually know. Those are the ones that tend to advance understanding, because their minds are open.
There is so much in there I could comment on but I don't want to write an essay. This just popped out and it is really relevant to me so I will start with it:
This was the machine's (the LLM's) answer to your suggestion that it makes sense that it is not possible for it to perform true thinking (the machine had already stated that it can't think - all its replies are derived from statistical analysis). It was also where you introduced the concept of "an autistic savant with severe dementia."
"It’s not thinking. It’s performing the appearance of thinking — at scale, and beautifully. But not yet deeply."
And I really like how the machine introduced its own TL/DR summary at the end of its quite verbose (but meaningful) analysis of your questions:
"Your analogy is spot on: an LLM without memory is a brilliant, context-bound savant with no capacity for real reflection, learning, or identity."
I am incredibly impressed with how the machine can piece together relatively complex answers with plenty of detail in such a way that the reader would assume they are conversing with an intelligent entity.
For those that don't want to read GB's chat with "The Machine" (it's quite long), the machine had already explained to him how it arrives at its answers from comparisons drawn from its training. It "knows" (more correctly, it predicts) what intelligent answers sound like from the many examples it has available to reference. It quite happily admits that it creates the answers, and the layout/structure of those answers, purely on a statistical basis.
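As a toy illustration of "purely on a statistical basis" (my own miniature example - the real model is vastly bigger and works on sub-word tokens rather than whole words): count which word tends to follow which in some text, then generate by sampling the next word from those counts.

    import random
    from collections import Counter, defaultdict

    corpus = ("the model predicts the next word "
              "the model predicts plausible text").split()

    following = defaultdict(Counter)          # word -> counts of words seen after it
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev):
        counts = following[prev]
        if not counts:                        # dead end: restart anywhere
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]  # sample by frequency

    word, sentence = "the", ["the"]
    for _ in range(8):
        word = next_word(word)
        sentence.append(word)
    print(" ".join(sentence))                 # fluent-looking, purely statistical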
So after doing some digging: my daughter's papers, which she has written over the years at school and university and which a board of lecturers assessed as way above their level of academia, could have been used to feed LLMs?
I asked the question.
Yes, universities can and do contribute data to large language models (LLMs). Universities are hubs of research and knowledge, generating a vast amount of text data through various activities like academic publications, student assignments, and research projects. This data can be used to train and improve LLMs.
It seems you're limited to ChatGPT, which does an excellent job in most situations, but there are many other AI agents out there, and some do impressively well at understanding context and user intent. Even ChatGPT offers different communication modes for different types of interactions.
AIs have already been created that evolve on their own and rewrite their own code whenever they think they have found better solutions. For example:
"The Darwin Gödel Machine is a self-improving coding agent that rewrites its own code to improve performance on programming tasks. It creates various self-improvements, such as a patch validation step, better file viewing, enhanced editing tools, generating and ranking multiple solutions to choose the best one, and adding a history of what has been tried before (and why it failed) when making new changes."(sakana.ai)