The app won’t be published and I won’t put SS# or EIN into it. As far as the accounting and reports, it’s matching ADP calculations and generating a better looking report. Like I said it’s for 4 people on salary and nothing ever changes with the payroll. Certainly don’t want to red flag myself for an audit though. Is filing your own payroll taxes an immediate alert for an audit?
Only if you fupk up.
Also, like Frank said, management would prefer AI does it whether we like it or not.
It’s actually really funny we live in a world where writing your unit tests could get you laid off.
I agree this stuff is stupid. I will attempt to use as little AI as possible in the coming years but my thought process is:
The hard part of making code that needs to work is specifying the functionality of your code in a way that makes sense, and ensuring that your code at least somewhat follows some constraints to avoid system failure. To me, test code can be vital in ensuring that your code follows a certain plan, and if the AI garbage you use to generate your code does something wrong (big shocker), the mistake should show up in the tests!!!
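To sketch the point above with a made-up example (the function name and numbers here are purely hypothetical): suppose an AI tool spat out a discount helper for you. Tests written against the *spec* rather than the implementation will catch the classic generated-code slips, like dividing by 10 instead of 100.

```python
# Hypothetical example: suppose the AI generated this helper for you.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (e.g. percent=20 means 20% off)."""
    return round(price - price * percent / 100, 2)

# The tests encode the plan, not the implementation. If the generator
# had divided by 10 instead of 100 (a classic slip), these fail loudly.
def test_typical_discount():
    assert apply_discount(50.00, 20) == 40.00

def test_zero_discount_is_identity():
    assert apply_discount(19.99, 0) == 19.99

def test_full_discount_is_free():
    assert apply_discount(80.00, 100) == 0.00
```

Run it with pytest or just call the test functions directly; either way, the tests pin the intended behavior down independently of whoever, or whatever, wrote the function.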
To me, using AI to specify proper program behavior is like trying to use a dowsing rod to find water in the ground
I hate how buggy and unstable Inkscape is (I blame GTK personally… FSF have a knack for buggy software), but it lets you make SVGs with filters that can be displayed on the web, and it doesn’t cost a cent. Most other vector design programs do not base themselves around SVG (which in some ways can be a good thing) and tend to give you unexpected results when exporting, and definitely wouldn’t support filters at all.
At your size the chances of an audit aren’t even very high, to be honest. I’m just noting my own fears. Like I said, your use case is probably fine, but it also depends. If the numbers look good it’s no issue; if they look off, that’s when the extra scrutiny will kick in. Auditors, in my experience, aren’t trying to get you on a technicality. They’re just trying to do their job like everyone else and aren’t going to look for major issues unless things smell like they might be there.
With that said, I’ve worked at a firm that had a tip-off of bad activity and was audited multiple times with an extreme fine-tooth comb, because a disgruntled former employee kept sending in anonymous tips.
While I agree that unit tests are vital, they are still typically the least amount of overall work when generated by an AI. But also, by your logic, no piece of code should be generated by an AI because all code is vital. Something’s gotta give.
My plan is to do as little extra work with AI as possible, including prompting, while also avoiding the “or else.” Overall, unit tests are relatively small (if you’re doing them right), both the code itself and the prompt; thus, less effort and less to review.
AI garbage is AI garbage. Whether I’m asking it to generate pieces of core code or unit tests, I cannot simply trust it to work without any doctoring. If I’m going to have to go back and validate that the garbage it spat out is not, in fact, garbage, I’m going to review as little as possible. If I could trust it 100% without having to review/doctor, I’d use it more often and on a wider scope.
To reiterate, if I still have to review the generated code, I’m going to review as little as possible.
Perhaps I’m misinterpreting what you’re saying, but to me, it reads like you just let AI write your code without verifying, which is the complete opposite of what I’m saying.
To be extra clear, I am extremely, vehemently anti-AI, but I’m also pro-having-money, thus, the balancing act. If I could get away without using AI and without fear of repercussion, I would, but that simply isn’t my case, thus as little AI integration as possible
‘We are spending more than we can actually afford on this useless product whether you like it or not. Now use it in order to justify our spending or you’re fired!’
I couldn’t decide between
and
because you’re absolutely right. Someone high up drank the kool-aid and is now forcing us to use AI to justify being suckered
“We can’t afford that awesome logging software you wanted. Now here, go use the reason we can’t afford that awesome logging software you wanted to make looking at logs even harder on this crap logging software we’re forcing you to use.”
We developers need to realize how much management hates us. The holy-grail of modern business is computer AI that writes bullet-proof software simply by speaking into a microphone.
Think about it. These guys think they are hot spit. They make tons of $. But they depend on a bunch of nerds who cost a fortune, cannot make firm commitments on estimates, and have the power to shut down their business by simply misplacing a comma, or by making a bad technical choice that costs them millions for reasons they cannot possibly comprehend. Add to that personal arrogance and bad wardrobe, and we are simply not part of the ‘club’.
I would hate me too. I would like nothing more than to replace people like me with cheap, dependable, subservient AI agents that work 24 hours a day, deliver software with no bugs and give them nothing but affirmation that their designs and business decisions are amazingly insightful and brilliant.
This precedes AI by decades. Back in the mid 80’s the engineering firm I worked for bought a DEC VAX system for design analysis and data reduction in our testing facility. Previous to that we were using a Wang system with Basic programs on cassette tapes for the test data and buying time on a time share system for designs. My job was to transfer our programs from the Wang system to the VAX. It was pretty tedious as the new system could not read the cassette tapes and DEC Basic differed a bit from what was on the Wang system. At one point the lab manager asked me why it was taking so much time and said something to the effect that “It’s a computer, can’t you just tell it what to do?” I tried somewhat in vain to explain to him that that was what programming is.
It always has been, but LOL, there’s always going to need to be some kind of human attendant with a programming background, and with how much output managers/C-suite are going to be expecting, they aren’t going to be saving money on needy meat suits.
There’s more than one way to fix this, and some of them are significantly out of scope for this thread. That said, I do see your point. NASA almost lost a shuttle because of a misplaced decimal point. But the current iteration of AI is not going to be bug-free because, at present, it still sources code made by humans, which will inherently be prone to bugs.
I just listened to an incident where a casino’s AI facial recognition mistakenly identified someone as a previous trespasser, and when he presented his ID to security, they still arrested him on his way out because they thought it was suspicious that he suddenly wanted to leave.
The cops got there, he presented his NV Real ID commercial driver’s license, pay stubs, union card, and vehicle registration, and they still took him to jail for failure to identify, changed his booking papers from John Doe to his real name, and let him go. But now he has a criminal record.
Because some guy who looked like him slept at the casino and got trespassed, and the people with authority turned off their brains and let the machine take the wheel.
Scary stuff. His name is Jason Killinger, he has sued the casino and the police, the casino settled but the case against the officer is still ongoing.
“Their fancy software says it’s a 100% match; it says it’s legit, so it’s legit.” Unacceptable. Use your GD critical thinking skills, Officer Doofy.
https://youtu.be/B9M4F_U1eEw?si=x9KkjRGrDGC25IxZ Occasionally there is some colorful language, but it’s mostly censored.
You’re forgetting that all code is prone to bugs. There is no way to make a computer that cannot make mistakes; making mistakes is part of doing anything. Robots trained to emulate the behaviour of other robots are no less prone to error than robots trained on the behaviour of humans. People have a tendency to trust machines, but machines have no advantage over a human with respect to judgement. If you train them on a dataset, they will inherit that dataset’s biases. The information they get has to come from somewhere, and if it’s not from flawed humans then it’s from other flawed machines, because all machines have flaws.
If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.
This is all I remember from IT classes, that and turn it off and on again
I fear the closer you look at it the whole concept just appears silly. I get paid to trick rocks thousands of miles from me into performing complex tasks to keep the wheels of capitalism grinding. Of course sometimes the rocks misbehave.
I get paid to tell people the way they trick rocks is wrong and not secure enough.
I get paid to make sure those rocks still have tasks to perform ![]()
I get angry about the fancy thinking rocks making their way into my stupid cars
Who’s Al?