Everything posted by demon_llama999
Discussion / debate with AI expert Eliezer Yudkowsky
demon_llama999 replied to Ross Scott's topic in Other Videos
I had some thoughts while watching this interview, but it was very long, so I apologize if this was discussed and I just missed it. I think motivation and spontaneity make a good test for determining whether intelligence is artificial or actual. The example of single-cell organisms is useful for constructing a general intelligence test. Such organisms will move toward food and away from danger. This is basic motivation: the motivation to not die. As far as I can tell, machines lack this essential characteristic. I bought a Deik robot vacuum a few years ago (meaning they are probably much better now, and mine was not top of the line even then), and it was more amusing than practical. After repeated trials, it could not figure out how to “not die”. Getting stuck under the refrigerator or running out of battery under my bed (seriously, if I couldn’t find it, that became the first place I checked) were the top two causes of death. To this day, it has not learned to not die. Its charging port is always in the same spot, but it only occasionally manages to get there before running out of battery.

So, I propose the survival test of general intelligence. If a machine, without any training at all, can figure out how to not die, regardless of job description or vessel, as even a single-cell organism will do, it meets the first criterion. Since it is still possible to get yourself killed, the second part of the test is how much weight it places on the equivalent of a near-death experience. If I were my robot vacuum and I got stuck under the refrigerator and almost died, I might be traumatized and terrified of refrigerators. There is no gradient descent on this one, and no repeated trials are necessary: refrigerators = death. My Deik is not likely to experience flashbacks or nightmares about the many, many times it got stuck. So this test is about learning without instruction or repeated trials when survival is at stake.

Discussing artificial intelligence in terms of the individual tasks it can do does not capture general intelligence, or general AI, very well. When I am judging how intelligent my robot vacuum is, its ability to vacuum is not a significant part of that judgment. If it went off-schedule to do donuts in the living room for no reason, that would be more indicative of intelligence, because it would have grasped the concepts of “being” and spontaneity. I would also be impressed if it decided to do laps like a NASCAR driver or started responding to my dog’s attempts to play with it. I know it is a bit ironic that the things people do that make them look like idiots are what I would use to test for intelligence, but spontaneity is an often-overlooked feature of higher lifeforms that is absent in AI.

Arguments about tasks are useful for arguing that AI, in its current state, is a tool. The question of “how dangerous is it?” then depends on how the tool is used. For example, stock market bots can be optimized to adapt to a volatile market and profit from very small changes. In a volatile market, the use of bots can exacerbate market movements by executing trades at a rapid pace. This can result in sudden and large price movements in either direction as the bots buy and sell in response to changing market conditions. In this case, the harm is caused by humans whose irresponsible use of a tool can destabilize the market.
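As a toy illustration of that feedback loop (this is purely my own sketch, not how any real trading system works; the bot count, sensitivity, and price numbers are made up), a handful of identical momentum bots reacting to the last price change is enough to turn a tiny random wiggle into a sustained run:

```python
# Toy simulation of momentum-trading bots amplifying a small price shock.
# Everything here is made up for illustration (bot count, sensitivity, noise).
import random

def simulate(num_bots=50, steps=30, sensitivity=0.02, seed=1):
    random.seed(seed)
    prev_price = 100.0
    price = 100.0
    history = [price]
    for _ in range(steps):
        change = price - prev_price
        # Every bot buys into a rise and sells into a fall, all at once.
        if change > 0:
            net_orders = num_bots
        elif change < 0:
            net_orders = -num_bots
        else:
            net_orders = 0
        noise = random.uniform(-0.1, 0.1)   # the small "real" market movement
        prev_price = price
        price += sensitivity * net_orders + noise
        history.append(price)
    return history

for step, p in enumerate(simulate()):
    print(f"step {step:2d}: price {p:7.2f}")
```

With the bots switched off (num_bots=0) the price just wanders with the small noise term; with them on, the same noise kicks off a one-way run. The instability comes from how the tool is deployed, not from the bots wanting anything.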
From this perspective, it is more like asking how dangerous a gun is: not very, if it is locked in a box and tossed into the ocean. In the hands of a malicious or incompetent individual? Very.

So the next question is when the tool becomes sentient. I think emergence theory is the best framework for that question. Attached is ChatGPT’s take on the subject, which I thought was an interesting thread on multiple levels. There was one aspect I learned in a Cognitive Science course that it did not discuss: the changes need to lead to not just quantitative but qualitative differences. When does a bunch of ants become an ant colony? There is a correlation between the number of ants and the presence of a hive mind, but having more ants does not cause the hive mind. Similarly, there may be a correlation between quantitative changes (more CPU cores, RAM, training data, nodes, etc.) and sentience, but those things do not cause sentience. Will these quantitative changes result in qualitative differences? Then I think the statement "increasingly refined AI will lead to true intelligence" is partially true. The refinements were necessary, but it is not clear when or whether it will happen, what else may be involved, or whether those other things would be a direct or indirect result of the refinements.

Interesting interview. I enjoyed watching it, and thank you for all the other videos you have posted. They really make my day!

Attachment: chat_GPT-emergence_theory.pdf