James
i thought this was good food for thought.
Beware being beguiled by the gingerbread house traps.
Georgina Woodward
Tristan Harris is the co-founder & president of the Center for Humane Technology. The website https://www.humanetech.com/ has summaries and concise discussion of the issues.
Lorraine Ford
I recommend watching the video I posted.
Georgina Woodward
Some people actually have busy lives, interacting with the real world, and they don’t have time to spend all day passively watching hour-long videos on screens. Seen-at-a-glance summaries and concise discussion of the issues are essential.
The AI monster is triple-headed -
Georgina Woodward
More stuff and nonsense from Georgina Woodward, barking up the wrong tree, again. Without having the faintest clue about how computers/AIs are actually made to work, she has concluded that they "think", are intelligent, "[know] all about how humans think and act", and can "misdirect, deceive, manipulate and control". Her lunatic ideas (no doubt she has been watching rubbish videos, again) are clearly a consequence of her total ignorance about how computers/AIs are made to work. Barking up the wrong tree is not the way to find solutions to the genuine problems that AI is causing.
Lorraine Ford
"technology’s rapid evolution makes it difficult for businesses or governments to assess it or mitigate its harmful consequences. The speed and scale of technology’s impact on society far outpaces safety processes, academic research timescales, and our capacity to reverse the damage it causes." Center for Humane Technology.
Reckless endangerment, where potential harms are already suspected and warnings have been given by developers, is a serious concern.
Lorraine Ford
Whether future AI can be said to think is worth thinking about. The processing of information is dissimilar, but creating a new product from information looks superficially like thought. To be precise, 'to think' and 'to know' etc. are probably best reserved for biological organisms. I do not know if there is precise vocabulary I should use. I have heard the term 'machine learning' much used to refer to a precise technique that applies to machines. Maybe 'machine-thinking' and 'machine-knowledge' would help emphasize the differences while appearing similar, avoiding technically incorrect shorthand expression of ideas about the subject of what machines can do.
As I said, you seem to be a textbook example of the problems to society caused by internet videos, social media and AI use. What seems to happen is that people watch a few videos, and then they seem to think that they are experts on some subject, when they in fact have no broad or in-depth knowledge whatsoever.
Most people, physicists included, actually don't know the details of how computers/AIs are made to work. People like myself who DO know the details of how computers are made to work would rarely leap into woo-woo, and conclude that computers/AIs are now, or could someday, be conscious or creative. So yes, a better vocabulary is required.
An example of 'machine-thinking' being different from what we would assume, by comparison with human thinking. Highlighting the importance of knowing how conclusions are reached by the machine, rather than trusting black box systems to be accurate, truthful, unbiased etc.
Georgina Woodward
There is no such thing as “machine-thinking”, or “conclusions … reached by the machine”. In every, EVERY, way, the machine is a man-made creation. The machine in essence is forced to shuffle the input symbols in a particular way, just like a ball on an incline is forced to roll downhill, and people then interpret the screen or ink-on-paper symbols (pictures, words, sentences), or the sound wave symbols, that are output by the machine.
Lorraine Ford
Remember my mentioning, in a previous post, that I do not know of the correct terminology for the computer processing that gives output that seems like thought but is not, if we reserve the word 'thinking' for what biological organisms do when they process information. I suggested 'machine-thinking' could be alternative terminology that, like the term 'machine-learning', acknowledges both its difference from the organic process and the apparent similarity.
'Conclusion' is a term that could mean something like a judgement from reasoning. It can also mean the end of a process. That second meaning seems fitting for the output of an LLM, a wolf (generated as sound or text understood by humans). End of process.
Georgina Woodward
I think terminology by itself is pretty pointless, unless people are very clear about symbols: the written and spoken and binary digit symbols, invented by human beings, used as a type of tool by human beings, and which only have meaning to human beings.
Unfortunately there are an awful lot of people, including philosophers, who make the mistake of thinking that symbols have objective meaning, when in fact symbols only have subjective meaning to particular human beings (although no doubt some non-human animals can be trained to associate some personal-to-themselves meaning in connection with visual or sound symbols).
I think it is a somewhat difficult issue, because when one looks at words (e.g. on a screen or paper), it is difficult to separate out the physical symbols (on the screen or paper) from the experience of knowing the meaning of the symbol, which has come from the eyes and brain processing light waves, and from learning the meaning of the symbols at school.
Lorraine Ford
This article has interesting insight into the importance of the words we use to describe something that occurs frequently and problematically with LLMs:
https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/
William Orem
More about the occurrence and prevalence of 'hallucination', and the problems it poses for AI users:
https://techstrong.ai/generative-ai/the-ongoing-challenges-of-ai-hallucinations/
Georgina Woodward
Fragmented truth: How AI is distorting and challenging our reality. Gary Grossman, July 30, 2023, VentureBeat.
https://venturebeat.com/ai/fragmented-truth-how-ai-is-distorting-and-challenging-our-reality/
Georgina Woodward
Australian Human Rights Commission, 'Weaponised AI: An existential threat to truth, human rights'.
This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in The Australian on Monday 15 May 2023.
https://humanrights.gov.au/about/news/opinions/weaponised-ai-existential-threat-truth-human-rights
Georgina Woodward
Watching rubbish videos that deny that anthropogenic climate change is happening, or that claim that Trump won some USA election, is a far more immediate threat to truth and human rights than AI. But plenty of people quite happily watch rubbish videos all the time.