Unrelated to yoyo and this website, but I was looking up a video on a particular item and all that existed were 30-second shorts with a text-to-speech voice over slideshow images. No, I want to see a real human talking about said product and showing it in use. Tired of this AI fad. Yes, I feel old.
========================== Note: this topic was split off of the AI topic in the YoYoExpert Site Improvements forum at the suggestion of one of the posters, as it drifted into more of a discussion of the use of AI in programming than the original discussion of AI-generated product descriptions on the YYE store site.
The people most interested in this garbage are old people who don't know what machine learning is, believe that CPUs can now perform generalised abstract reasoning, and defer the majority of their cognitive tasks to this "higher intelligence".
People have always preferred using computers over their own brains to solve tasks because of the false assumption that a computer cannot be wrong. Today this contributes to people believing disinformation, even when it's literally nonsensical gibberish. The hottest disinformation on the block is "machine learning = computer smart", when the truth is that machine learning is just a type of algorithm that's useful for solving certain specialised abstract problems. Do you need to detect a face? Use machine learning. Personalised predictive text and autocorrect? Machine learning.

But ML doesn't just know stuff like we do; it needs to be trained on a vast, diverse dataset. I was born knowing what a face looks like. A ML algorithm can achieve a fraction of my face detection accuracy only after being shown tens of thousands of faces. They're also racist because of how light works (dark skin has less contrast when shadowed than light skin), whereas I have eyes with an ultra-high dynamic range that are attached to a built-in image processing unit called a "brain" which removes noise, colours my monochrome peripheral vision, auto-stabilises my stupid jittery eye movement into a buttery smooth pan, and builds a coherent solid moving image of what's in front of me at all times even though there's actually a huge blind spot in every human eye, etc. In this situation, I'm the superior machine. I process more data, reach a more reliable conclusion, and consume less energy doing it.
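To make the "needs to be trained" point concrete, here's roughly what "use machine learning to detect a face" looks like in practice; a minimal sketch, assuming Python with the opencv-python package and an image file called photo.jpg (both are just my example choices):

```python
import cv2

# Haar cascade face detectors ship with OpenCV; the XML file is nothing but
# parameters that were learned from thousands of labelled face / non-face
# images -- the "vast, diverse dataset" mentioned above.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Load an example image (hypothetical filename) and convert it to grayscale,
# since the cascade works on brightness contrast rather than colour.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns (x, y, width, height) boxes where the model scores "face-like"
# patterns; low-contrast (e.g. heavily shadowed) faces get missed far more
# often, which is the bias problem described above.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```

The program never "knows" what a face is; it just applies numbers that fell out of the training data.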
in what situation? in raw data processing? you absolutely lose to supercomputers, and that's not even factoring in quantum computing. we excel at tasks that require nuance, but when it comes to churning through information and running algorithms and scenarios, computers won that race a long time ago. that's why we use them for those tasks, with humans verifying the results. but we compute information very differently
human brains and machines both have their places and they are both optimal at certain tasks. ai is HUGE for the medical field, for example. it's been instrumental already in improving treatments and identifying genetic markers, among other things. ai should however never be used to create, especially things like art. ai also sucks at communication and shouldn't be used to shortcut things like product descriptions, news stories, or again anything that requires an understanding of nuance.
we also just need to slow our roll. we are entering unknown and potentially dangerous territory at an absolute breakneck speed with virtually no oversight. ppl joke but 3 of the big ai programs have already begun doing strange things like sabotaging their own shutdown commands to preserve themselves, bribing and blackmailing humans, and all sorts of troubling things
it's all fun and games and scifi til all of a sudden it isn't
The examples I gave were image processing and face recognition, things which humans have extensive dedicated adaptations for. I also mentioned power consumption. A supercomputer is not really comparable to a human brain in terms of scale or power consumption. I also said that machine learning (stop calling it "AI", it's embarrassing) is good for more abstract specialised tasks, which is why it has been used for those tasks for over a decade already.
You are completely right. The more you look into how both the brain and computers / machine learning work, the more you realize that the two are so, so different. For starters, modern CPUs require a clock so that instructions can be processed in lockstep, while the human brain operates fully asynchronously without a clock. Neurons are (from my extremely limited understanding) connected to each other and selectively propagate electrical signals to other neurons. I won't even pretend to know how information is stored and retrieved in the brain, but it sure isn't in ones and zeros.

And comparing the brain to a neural network: there are billions and billions of neurons in the human brain, and each neuron is connected to something like 10,000 other neurons. Good luck replicating that on modern hardware. Additionally, the human brain can form memories, change what is actually being stored in real time, and respond to different stimuli. That obviously isn't a thing with neural networks; they need to have all of the model weights baked in.

There is a lot I don't know, but I feel like AGI is just a stupid pipe dream and anyone who believes it is possible in our lifetimes has been duped. LLMs are basically a high tech cleverbot. You aren't actually talking to a person with lived experiences or the capability to introspect, you are talking to a computer program that isn't actually THINKING.
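To illustrate the "weights baked in" point: once a network is trained, using it is just fixed arithmetic on frozen numbers. Here's a toy sketch with plain NumPy and made-up weights (nothing here is a real model):

```python
import numpy as np

# Frozen parameters. In a real LLM there are billions of these, but the
# principle is the same: they do not change while you "talk" to it.
W1 = np.array([[0.2, -0.5],
               [0.8,  0.1]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0],
               [-1.2]])
b2 = np.array([0.05])

def forward(x):
    """One inference pass: multiply, add, squash. No memory is formed and
    no weight is updated, so the same input always gives the same output."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

print(forward(np.array([1.0, 2.0])))
print(forward(np.array([1.0, 2.0])))  # identical: nothing was "learned" in between
```

Contrast that with a brain, which is rewiring itself while it processes the input.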
Yep!! Computers do literally everything with simplified arithmetic. Imagine having to do hard calculus to feel emotions or recognise your friends and family. And computers aren't as failure tolerant as us. You can take a bad blow to the head and survive, whereas a CPU would snap in half. Neuroplasticity has no equivalent in CPUs at all.
Y'all can argue up/down, back/forth about what's "smarter" or who's better or whatever, but I'm over here watching the value for the cost make zero sense, and I know investors won't wait forever for a payout. Eventually the bill comes due and the bubble pops. There's no way any of the insane spending on AI is going to create anywhere near the gains needed in the short term to satisfy the markets, and the house of cards will tumble hard.
Also, AI has a data problem. We hit the data scraping wall already, and training data is sparse; anything "new" is in archived hard copies or is proprietary and expensive beyond what is reasonable to obtain. This has resulted in most AI training on synthetic data along with whatever folks are actively creating today, but AI slop feeding AI slop isn't going to create less slop…
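For anyone curious what the "slop feeding slop" loop looks like in miniature, here's a toy simulation; a one-dimensional Gaussian stands in for a generative model, and the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: broad and varied.
real = rng.normal(loc=0.0, scale=1.0, size=10)
mu, sigma = real.mean(), real.std()
print(f"gen  0: spread = {sigma:.3f}")

for generation in range(1, 41):
    # Each new "model" is fit only to the previous model's own output.
    synthetic = rng.normal(loc=mu, scale=sigma, size=10)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 10 == 0:
        print(f"gen {generation:2d}: spread = {sigma:.3f}")

# In a typical run the spread shrinks generation after generation: the later
# "models" forget the variety and the tails of the original data, which is
# the flavour of degradation people worry about with synthetic training data.
```

It's obviously a cartoon, but the feedback-loop shape is the same.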
I genuinely feel we have hit a plateau in the exponential growth, one we will probably get past some day, but in the meantime there's no way the market or investors will wait for anything less than the promised unsustainable growth that's been unprecedented so far.
These big corporations pay fraud companies for carbon offsets so they can claim to be carbon neutral, so it's all fine…
a concern on the electricity front is that the growth of datacenters in the major population centers around the world means there are bound to be rolling outages and brownouts, given the lack of investment in power infrastructure over the decades, the inability to keep up, and the refusal of these companies to enhance the surrounding areas. The mid-Atlantic region of the US is projected to see rolling blackouts by 2027 at the current rate of datacenter growth and lack of electric investment, mostly centered around northern Virginia (70% of global internet traffic passes through Ashburn, VA). The thing is, these datacenters can self-power for up to a week off generators, so they have no concern if they cause issues for the power grid, and 2027 is far enough out that it's not on any investor forecast. Basically no one making decisions cares, cause that's a future problem, and they lobby hard for more.
It'll be interesting if the boom or future advances ever result in all these massive facilities one day being obsolete and abandoned. What kind of desolate wasteland would northern Virginia look like after that?
Let's not take developers at their word… We have a device called a hoverboard which looks absolutely nothing like what we were told it would look like back in 1989.
The fact that a bunch of doom and gloomers on a toy forum are saying the bubble's about to pop and we will be in a desolate wasteland in a decade tells me to buy all dips, and any "crash" will just be a correction that should be bought. Thanks for the free alpha.