Who is using AI at work?

Did you hear about the Replit AI agent that decided to delete thousands of company files, then lied and tried to cover it up?

6 Likes

i mean, what could go wrong….

5 Likes

Yes, it’s hilarious, and not the first incident like this. My favorite was the Anthropic AI vending machine gaslighting management after some very questionable actions

If anything, AI is a tad bipolar

1 Like

Yeah, what could go wrong that hasn’t been predicted in hundreds of sci-fi movies! :wink:

2 Likes

We should ask HAL.

1 Like

I created the demo app in about one month. I was an iOS consultant at $45/hr in 2010-ish. Today, more like $65/hr+.

2 Likes

What if a simple web app already existed and was optimized for mobile, but just wasn’t in iOS format? How long do you think it’d take to get it App Store ready?

My workplace is planning a rollout of “AI”.

Supposedly this is to help with productivity.

(but really it’s FOMO)

I don’t think I will use it for anything.

As we all know from yoyo: any skill not practiced is likely to decline.

The last skill I want to pass off to another entity is thinking, as I’d prefer to stay quite skilled at that. :winking_face_with_tongue:

4 Likes

I was just reflecting on my comment about errors from AI. I referred to them as ‘hallucinations’. That is sort of buying into the AI hype, isn’t it? The models are not ‘hallucinating’. They are getting it wrong.

This points to the elephant in the room: reasoning. Companies are paying big bucks for AI engineers in the race to create AI models that can reason. Without reasoning, these LLMs are just really good telephone answering machines.

4 Likes

yep, probably best to avoid humanising the buggy software’s wrong answers

I often remember how people used to say “it’s thinking” when we waited for a slow response from a DVD player’s eject button

6 Likes

The job I just left was seriously behind on tech, so I quietly used AI without anyone noticing. Mostly for drafting emails, memos, or policies—especially when the message was sensitive or had to be super clear.

I also used it to bounce around ideas, since most coworkers would tune out if I tried to talk through anything complex. And when random projects landed on my desk that were way outside my role, AI helped me fake it till I figured it out.

I was a one-person team handling 200,000 sqft of chaos—inventory, logistics, facilities, payroll, IT, scheduling, etc. AI didn’t fix everything, but it definitely helped lighten the load.

That said, I’d never trust it with anything that actually mattered. It’s just a tool.

3 Likes

My partner is doing a PhD related to AI, and she actually pointed me towards a paper on this topic whose conclusion was that “bull****ing” is a more accurate descriptor for what AI is doing in this situation. The term hallucination implies goodwill: that it is a rare and honest mistake rather than something baked into how the tool has been designed, and it serves to humanise the tool. BSing is more accurate because the model is confidently telling you something it doesn’t have enough information to properly answer, just so it can give you an answer instead of saying “I don’t know”.

4 Likes

But bull****ing can have two distinct motivations:

  1. Ignorance - they may not know they are spewing garbage.
  2. Deception - using disinformation to purposely deceive.

The first takes no reasoning at all. The second requires reasoning and a decision to constantly maintain the desired deception. I believe AI is only capable of the first type.

If anything, AI is arrogant in its responses, gaslighting you into believing it’s right.

I don’t even touch AI… even if the Adobe suite I use at work wants me to check it out, I refuse to do it. I will create my own content, and honestly, I’m waiting for the AI bubble to burst just because I’m tired of everything in 2025 being AI. AI “art”, in my opinion, is a big middle finger to actual artists, all for saving a buck.

4 Likes

(sorry to be pedantic)

you are kind of implying intent here,

it’s not “arrogant”

it spits out arrogant-sounding words because the training data weights those higher in the probability function than “not sure”

what I would like to know is: why do OpenAI and others consider confidently wrong output to be more desirable than an expression of uncertainty?
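to make that concrete, here’s a toy illustration (the numbers are invented, nobody’s real model): candidate continuations get scored with logits, softmax turns those scores into probabilities, and a confident phrasing that was common in the training data simply outranks a hedge

```python
import math

# Toy logits for two candidate replies; the values are made up purely
# to illustrate the mechanism, not taken from any actual model.
logits = {
    "The answer is X.": 3.2,  # confident phrasing, common in training data
    "I'm not sure.": 1.1,     # hedge, comparatively rare
}

# Softmax: p_i = exp(logit_i) / sum_j exp(logit_j)
total = sum(math.exp(v) for v in logits.values())
for reply, logit in logits.items():
    print(f"{reply!r}: p = {math.exp(logit) / total:.2f}")
# prints roughly 0.89 for the confident reply, 0.11 for the hedge
```

so at least in this toy picture, “not sure” isn’t suppressed on purpose; it just loses the probability contest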

2 Likes

Generative AI’s best use cases are not original content generation, on account of the whole “not actually thinking” bit.

It’s very useful in translating from natural language to other media (including code). The whole “write me a paragraph to convey the thought in this sentence fragment” is a kind of translation as well.

It’s also super useful for data mining, and can generate content that joins multiple data sets in novel ways.

The fact that its interface is natural language also makes it easy for it to decompose complex tasks, deciding to tackle a problem with a graph of solutions rather than a single query; a rough sketch of the idea is below. This ability is quite powerful and is where generative AI is eventually going to manifest a good amount of its capabilities.
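A minimal sketch of that decomposition loop, assuming a hypothetical `llm(prompt)` helper (everything here is invented for illustration, and it’s a linear chain for simplicity where real agents build richer graphs):

```python
# A minimal sketch of the decomposition idea, not any real framework.
# `llm` is a hypothetical stand-in; swap in an actual model call.
def llm(prompt: str) -> str:
    # Echo a canned reply so the sketch runs without any API.
    return f"[model reply to: {prompt[:40]}...]"

def solve(task: str) -> str:
    # 1. Ask the model to break the task into smaller steps.
    plan = llm(f"List the subtasks needed to: {task}")
    subtasks = [ln.strip("- ") for ln in plan.splitlines() if ln.strip()]
    # 2. Answer each step with its own focused query.
    answers = [llm(f"Do this subtask: {s}") for s in subtasks]
    # 3. Combine the partial answers into one result.
    return llm("Combine these partial results:\n" + "\n".join(answers))

print(solve("summarize last quarter's inventory data"))
```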

3 Likes

The problem is people. People like to be told they are right :hugs:

4 Likes

So you’re implying people influence and corrupt AI? ’Cause some of the stories of AI agents gaslighting people are hilarious, but they also honestly make me think it’s people that are the problem

1 Like

People are the problem. People are unreliable hypocrites, and the AI algorithm was trained on people’s information.

4 Likes