• Members 1409 posts
    April 27, 2025, 7:12 p.m.

    I don't really know how that will work or how effective it could / will be.

    I compare the massively parallel network of our brains to computer networks. There are up to 100 billion neurons in our brain. Each neuron has an average of 1,000 synapses (connections to other neurons), but there can be up to 200,000. If we consider a neuron as storing a piece of data (a bit or byte, whatever - a poor analogy, but I don't have another), how does that compare to a computer system? In the computer world we can store an incredible amount of data, but how connected is each piece of stored data to the others? In computer storage memory there is no connection between stored data other than retrieving the data we want into processor memory and acting on it there. Processor memory is quite limited in size compared to storage memory. We can network many processors together, but what is the physical limit? We can't connect anywhere near as many processors together in a massively parallel sense as neurons are connected.

    We can make and network groups of processors, then network those groups together, and so on, and get something quite large and distributed. But there always need to be control layers that dictate what all these connections are doing, and those control layers grow exponentially the more we try to expand the network. Every process occurring in this system - from data retrieval, to processing in local groups, to processing in larger groups, to the layered control coordinating the whole thing - can only do so via explicit instructions. Each process produces an outcome; higher-level control can then group lower-level outcomes. Each step requires a branch in the code / explicit instructions. It is possible to keep adding layers that branch control, giving higher-level outcomes. At some point we may have the required outcome, or we may need to go back to some prior point and reprocess with other parameters (which again requires branches in the code). Once we have an outcome it is possible to store it so we don't need to recalculate it.

    That step may be considered by some as learning. The system could store many outcomes. What happens when we want to retrieve a stored outcome? Another control layer has to decide which relevant outcomes are the answer or solution to retrieve - once again, explicit instructions. We can grow the system as large as we like (as far as is possible given size and power constraints - note they are already talking about the large amounts of electrical power these "ai" systems are using), but sitting over the top of it there is always a control layer that operates from explicit instructions. The uppermost layer can't learn - to change, it needs some external input.
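    The "store an outcome so we don't need to recalculate it" step is what programmers call memoization, and it really is just explicit instructions around a lookup table. A minimal Python sketch (the function names `solve` and `expensive_computation` are purely illustrative, not from any real system):

    ```python
    # Memoization sketch: cache computed outcomes so they are looked up,
    # not recomputed. Note both storage and retrieval are explicit rules.
    cache = {}

    def expensive_computation(problem):
        # Stand-in for the lower-level processing layers.
        return sum(ord(c) for c in problem)

    def solve(problem):
        # Control layer: check for a stored outcome first.
        if problem in cache:
            return cache[problem]       # "recall" is just a dictionary lookup
        outcome = expensive_computation(problem)
        cache[problem] = outcome        # "learning" = storing the outcome
        return outcome
    ```

    Whether storing a result like this counts as "learning" is exactly the question - the rule deciding what to store and when to retrieve it was still written by someone outside the system.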

    All I am trying to say is that it is physically impossible to create a computer system that can mimic the thinking / learning capacity of our brains. Contrary to what Hollywood would have us believe - that Skynet became self-aware (a whole other level of consciousness above a networked system, requiring sensory input) - it just isn't physically or computationally possible.

    The media love it because it fuels their existence. Some academics love it because it gives them cause or relevance. The gullible suck it up and just like a conspiracy theory it grows its own momentum until at some point the wheels fall off when reality finally hits home.

    Now that's not to say that a system couldn't generate images on demand, but I am struggling to see how a system could realistically generate the whole gamut of potential images we all capture. It may give me an eagle in flight, but how big would the system need to be to give me many variations on all birds / wildlife / flowers / landscapes / cityscapes, etc.? It would require a ridiculous amount of stored images / data to draw from, and then it needs the ability to present what it creates in a realistic / plausible way...

    Phew...😬

  • April 27, 2025, 7:18 p.m.

    Very well explained - for a computer system

    But how does a human brain recall facts and make assumptions? I think the issue is that no one actually knows. If we did, we might be able to duplicate it. But all AI systems are just guesswork at trying to mimic the human brain - or possibly not even going that far. Let's be realistic: they are just enormous, very fast list processors with a bit of extra processing on top.

    Alan

  • Members 1409 posts
    April 27, 2025, 7:42 p.m.

    I can't begin to think I know but I truly believe the answer is there to be found in the higher levels of Yoga - and not in books - it's an experiential thing. The books can only lay out the groundwork / framework - necessary steps.
    My question is - what is consciousness?

  • April 27, 2025, 9:06 p.m.

    Don't be so sure. Deep neural networks create connections - like those between neurons; how good the entire architecture is (compared to brain architecture) is not so clear.

  • Members 809 posts
    April 27, 2025, 11:47 p.m.

    Here's one for AI-bashers. I briefly subscribed to an AI site and asked for a picture of a Praktica MTL-3:

    AI at left
    kronometric.org/phot/camera/AIvActPraktica.jpg

  • Members 2414 posts
    April 28, 2025, 4:05 a.m.

    Looks artificial to me; that's how they solve the problem of getting sued.

  • Members 741 posts
    April 28, 2025, 4:15 a.m.

    In my opinion, consciousness is simply self-awareness. Regardless, I think a more utilitarian approach is necessary for the definition. That is, why would anyone care if something is conscious? For example, if you eat red meat, and learn that cows are conscious, are you going to stop eating red meat? Very unlikely. You'll either change the goal posts on what consciousness is or not care that you're killing and eating a conscious animal, in which case, we're back to the utilitarian question: what does it matter if something is conscious?

    However, setting that aside for the moment: neither film nor a digital camera works like the human eye. Yes, they have elements in common, but they do not operate in the same way. One wouldn't say that film or digital sensors couldn't make photographs or video, right? Likewise, even if we take it as a given that AI works rather differently from the human brain (that is, it has elements in common with the human brain but is not the same), that doesn't mean that AI will not be able to think or be conscious. Again, for me, the test would be whether AI were self-aware, but how do you test that? Regardless, if AI produces "photos", or processes "low quality" photos into "high quality" photos, and no one can tell the difference, what does it matter?

  • Members 741 posts
    April 28, 2025, 4:16 a.m.

    The one on the left is AI.

    Told ya! 😁😁😁

  • Members 270 posts
    April 28, 2025, 7:54 a.m.

    I think we may be over thinking this...

    How do we create images? Not just "point a camera and press a button", as that only leaves the question of "what is an image we find interesting?" And that's an easy one, as it's generally the lowest common denominator for the mass market: cats, puppies, babies, vivid colour, high-contrast "stormy sky" B&W with a dose of long-exposure motion blur. None of those are particularly hard to generate, given that AI centres already have access to Social Media, and there's probably a reason that Cloud Storage/Backup is being pushed as the default option these days (the Save command in PS now defaults to the cloud).

    Other than that, we look and copy; the "thinking" is simply a problem of understanding labels, translating an instruction. And I don't think we can easily dismiss it just because small details are incorrect "first time", as even humans draft and refine until we get the finished image. All 2D images are abstractions, including the human understanding of 3D space; they all differ from reality. So the problem is not "recreating reality" but finding a distortion that fools us, which is simple when you add "popularity" as a marker against an image.

    Yes, but... The human audience will always, automatically and without thinking, equate understanding to their personal experience of being human - just like we look at a dog and map its facial expressions directly onto human facial expressions and the human emotions we associate with them. Hollywood is well aware of this contradiction: for the audience to make a connection with the "motivation" of the machine, it has to have a direct connection with our understanding of being human. The whole idea is just a projection of an irrational human fear onto a vehicle that allows us to believe there is no remorse in the actions...

    We also equate intelligence to our personal understanding, and not the collective behavior of a group.

    The danger of AI is that it has no conception of morality and can therefore be bent to the will of those "bad actors" who are motivated by personal goals. Then, if we add in group behaviour - how and what we mostly respond to as a group - the problem with the second and third generations of AI may well be what they have learnt as the definition of "success".

  • Members 809 posts
    April 28, 2025, 11:33 a.m.

    Personally, I agree.

    I like "AI" but only to use it to get information e.g. "what is Y'CbCr", NOT to write articles for me and NOT to create stupid images based on some text input

  • Members 741 posts
    April 29, 2025, 2:51 a.m.

    I would argue that's true for a huge number of people, and perhaps the majority, unless you consider "morality" to mean "It's OK when I do it, but when others do it, it's bad".

    There's nothing stupid about this:

    www.reddit.com/r/ChatGPT/comments/1k9yow9/chatgpt_omni_prompted_to_create_the_exact_replica/

    😁😁😁

  • Members 2414 posts
    April 29, 2025, 5:10 a.m.

    The only way AI is going to make an image like this is if they pay me 😁 to use it.

    web sp.jpg

    JPG, 2.3 MB, uploaded by DonaldB on April 29, 2025.

  • Members 741 posts
    April 29, 2025, 7:34 a.m.

    I know it's not the same, but this is what AI can currently do (from pixabay by Kyraxys):

    dprevived.com/media/attachments/0f/2a/yUt5lLKdgltOCw3cK8aMYiZV9Y3M2F4clWUITBRTDWytwWO6AfyuS6SdBw8v1sZE/ai-generated-949.jpg

    ai-generated-9495734_1920.jpg

    JPG, 482.2 KB, uploaded by GreatBustard on April 29, 2025.

  • Members 2414 posts
    April 29, 2025, 8:48 a.m.
  • Members 531 posts
    April 29, 2025, 2 p.m.

    I'm not remotely knowledgeable about how modern AI systems work, but I believe, if I recall my Marvin Minsky, that they make use of neural networks - at least simulated neural networks rather than physical ones - and neural networks are inspired by the functioning of actual brain neurons, in the sense that they are networks of nodes that either trigger or don't trigger an output depending on the weighted sum of the inputs. What gets trained is the weightings in each 'neuron', and the training is an iterative process whereby the correctness of the output is measured, then the network weightings are tweaked until the final output is as correct as the network can achieve.

    It is not the same as a biological brain, but it is carrying out low level processing at least in a way analogous (somewhat) to a real brain. And being silicon it is very much faster.
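    That weighted-sum-and-threshold node can be sketched in a few lines of Python - this is the classic perceptron, a minimal illustration rather than how any modern system is actually built (the function names and the AND example are my own):

    ```python
    # A single artificial "neuron": fire (output 1) only if the weighted
    # sum of the inputs plus a bias crosses zero.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0

    def train(samples, weights, bias, rate=0.1, epochs=20):
        # Iteratively nudge the weights toward correct outputs --
        # this tweaking of the weightings IS the training.
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - neuron(inputs, weights, bias)
                weights = [w + rate * error * x for w, x in zip(weights, inputs)]
                bias += rate * error
        return weights, bias

    # Teach it logical AND: fire only when both inputs are 1.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train(data, [0.0, 0.0], 0.0)
    ```

    Real networks stack millions of these nodes in layers and use subtler update rules, but the principle - measure the error, tweak the weights, repeat - is the same.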

  • Members 243 posts
    April 29, 2025, 7:56 p.m.
  • Members 2414 posts
    April 29, 2025, 11:51 p.m.