Did you hear about a Replit AI agent deciding to delete thousands of company files then lying and trying to cover it up?
i mean, what could go wrong…
Yes it's hilarious, and not the first incident like this. My favorite was the Anthropic AI vending machine gaslighting management after some very questionable actions.
If anything AI is a tad bipolar
Yeah, what could go wrong that hasn't been predicted in hundreds of sci-fi movies!
We should ask HAL.
I created the demo app in about one month. I was an iOS consultant at $45/hr in 2010-ish. Today, more like $65/hr+.
What if a simple web app was already made and optimized for mobile, just wasn't in iOS format? How long do you think it'd take to get it App Store ready?
My workplace is planning a rollout of "AI".
Supposedly this is to help with productivity.
(but really it's FOMO)
I don't think I will use it for anything.
As we all know from yoyo: any skill not practiced is likely to decline.
The last skill I want to pass off to another entity is thinking, as I'd prefer to stay quite skilled at that.
I was just reflecting on my comment about errors from AI. I referred to them as "hallucinations". That is sort of buying into the AI hype, isn't it? The models are not "hallucinating". They are getting it wrong.
This points to the elephant in the room: reasoning. Companies are paying big bucks for AI engineers in the race to create models that can reason. Without reasoning, these LLMs are just really good telephone answering machines.
yep, probably best to avoid humanising the buggy software's wrong answers
I often remember that people used to say "it's thinking" while waiting for a slow DVD player to respond to the eject button
The job I just left was seriously behind on tech, so I quietly used AI without anyone noticing. Mostly for drafting emails, memos, or policies, especially when the message was sensitive or had to be super clear.
I also used it to bounce around ideas, since most coworkers would tune out if I tried to talk through anything complex. And when random projects landed on my desk that were way outside my role, AI helped me fake it till I figured it out.
I was a one-person team handling 200,000 sq ft of chaos: inventory, logistics, facilities, payroll, IT, scheduling, etc. AI didn't fix everything, but it definitely helped lighten the load.
That said, I'd never trust it with anything that actually mattered. It's just a tool.
My partner is doing a PhD related to AI, and she pointed me towards a paper concluding that "bull****ing" is actually a more accurate descriptor for what AI is doing in this situation. The term hallucination implies goodwill: that it is a rare and honest mistake, not something baked into how the tool has been designed, and it serves to humanise the tool. BSing is more accurate, because the model is confidently telling you something it doesn't have enough information to properly answer, just so it can give you an answer instead of saying "I don't know".
But bull****ing can have two distinct motivations:
- ignorance: they may not know they are spewing garbage
- deception: using disinformation to purposely deceive
The first requires no reasoning at all. The second requires reasoning and a deliberate decision to consistently maintain the desired deception. I believe AI is only capable of the first type.
If anything AI is arrogant in its responses, gaslighting you into believing it's right
I don't even touch AI… even if the Adobe suite I use at work wants me to check it out, I refuse to do it. I will create my own content, and honestly, I'm waiting for the AI bubble to burst just because I'm tired of everything in 2025 being AI. AI "art" in my opinion is a big middle finger to actual artists, all for saving a buck.
(sorry to be pedantic)
you are kind of implying intent here,
it's not "arrogant"
it spits out arrogant-sounding words because the training data weights this higher in the probability function than "not sure"
what I would like to know is why OpenAI and the others consider confidently wrong output to be more desirable than an expression of uncertainty?
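The "probability function" point above can be sketched in a few lines. This is a toy illustration with made-up numbers, not any real model's decoding code: if confident phrasings dominate the training data, a plain most-likely-token choice will almost never pick "not sure", regardless of whether the confident answer is correct.

```python
# Toy sketch (hypothetical probabilities): next-token choice is just a
# probability lookup, so a confident phrasing that is common in the
# training data beats an honest "not sure", regardless of correctness.
probs = {
    "The answer is X.": 0.62,  # confident answers are common online
    "It might be Y.": 0.33,
    "I'm not sure.": 0.05,     # rare: people seldom write "not sure"
}

# Greedy decoding: always pick the most probable continuation.
choice = max(probs, key=probs.get)
print(choice)  # prints "The answer is X." even if X is wrong
```

The same bias holds under sampling: "I'm not sure" would only appear about 5% of the time with these numbers, so confident output is the default behaviour, not a deliberate design choice at decode time.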
Generative AI's best use cases are not original content generation, on account of the whole "not actually thinking" bit.
It's very useful in translating from natural language to other media (including code). The whole "write me a paragraph to convey the thought in this sentence fragment" is a kind of translation as well.
it's also super useful for data mining, and can generate content that joins multiple data sets in novel ways.
The fact that its interface is natural language makes it easy for it to decompose complex tasks as well, and to tackle a problem with a graph of solutions rather than a single query. This ability is quite powerful and is where generative AI will eventually manifest a good amount of its capability.
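The "graph of solutions" idea can be sketched mechanically: split a task into subtasks with dependencies, then resolve them in dependency order, where each subtask would normally become its own model query. All the names and the decomposition here are hypothetical illustration, not any real agent framework's API.

```python
# Hypothetical decomposition of one complex request into a dependency
# graph of smaller subtasks (each would be a separate model call).
subtasks = {
    "summarise data": [],
    "find anomalies": ["summarise data"],
    "draft report": ["summarise data", "find anomalies"],
}

def resolve(graph):
    """Return subtasks in an order that satisfies their dependencies."""
    done, order = set(), []

    def visit(node):
        if node not in done:
            for dep in graph[node]:
                visit(dep)   # resolve prerequisites first
            done.add(node)
            order.append(node)

    for node in graph:
        visit(node)
    return order

for step in resolve(subtasks):
    print(step)  # an agent would issue one query per step
```

The interesting part is that a language model can produce the graph itself from a plain-English request, which is what makes the decomposition cheap compared to hand-built pipelines.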
The problem is people. People like to be told they are right
So you're implying people influence and corrupt AI? 'Cause some of the stories of AI agents gaslighting people are hilarious, but also honestly make me think it's people that are the problem
People are the problem. People are unreliable hypocrites, and the AI algorithm was trained on people's information.