I spent the last two days writing a proof-of-concept app for work. The feature I used most? The ‘Sleep’ feature on Copilot.
I swear, AI is totally useless when writing new code. So much so that I wonder whether people claiming to ‘vibe code’ are lying, marketing for AI apps, or just making stupid websites. I cannot imagine trying to actually write an application using AI. Waste of my time.
I know very little about coding and I’ve made 5 apps so far. What I do is tell ChatGPT exactly what I want to make, then I ask it to generate the perfect first prompt for Lovable.dev. I copy and paste the prompt ChatGPT generates into Lovable.dev and presto, I’ve got the start of an app. Then it usually takes somewhere between 10 to 50 more prompts to refine it and debug it.
PS. From what I’ve experienced Copilot is the worst AI of all of them.
I think lovable defaults to using Claude to write the code. I’ve found if you give it screenshots and description of what you want to change or how you want something to look it’ll nail it almost every time.
It hasn’t even been a week since the switch to AI First, but I would have told you it’s been months.
My experience so far: I have a new project to create an automated maintenance system for ~80 EC2 hosts that handles load balancing during maintenance, system updates, testing and validation of the hosts, etc. I have to use AI to build the whole thing.
There have been some helpful parts where it sped some things up. There have also been many times in which I had to throw away all the AI code and do it myself and then explain what I did and why so the AI could finish the work for me so that I got credit for being a good corporate drone.
For the most part it behaves like my kids when I ask them to clean their bedrooms. “It’s done! I did so great and cleaned everything!” (Room is a disaster.) Point out the stuff that isn’t picked up. Thirty seconds later: “I’m done!!!” (Only did a fraction of what I said.) Repeat until my hair falls out.
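For what it’s worth, the orchestration loop described above (pull a wave of hosts out of the load balancer, patch them, validate, put them back) is the part I’d rather sketch myself before handing anything to the AI. Here is a minimal, hypothetical Python sketch of that cycle; the AWS calls are left as injected callbacks, and every name, the batch size, and the host list are illustrative, not my actual system:

```python
# Hypothetical sketch of a rolling-maintenance loop for a fleet of hosts.
# The AWS calls (deregister/register load-balancer targets, SSM patch
# commands) are passed in as callbacks; nothing here is the real tooling.

def maintenance_batches(hosts, batch_size):
    """Split the fleet into waves so most hosts stay behind the load balancer."""
    return [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]

def run_maintenance(hosts, batch_size, drain, patch, validate, restore):
    """Drain, patch, validate, and re-enable each wave; stop on a bad wave."""
    for batch in maintenance_batches(hosts, batch_size):
        drain(batch)              # e.g. elbv2 deregister_targets
        patch(batch)              # e.g. ssm send_command to apply updates
        if not validate(batch):   # health checks / smoke tests on the wave
            restore(batch)        # put the wave back before bailing out
            return False          # fail fast instead of degrading the fleet
        restore(batch)            # e.g. elbv2 register_targets
    return True

# Example run with 80 stand-in hosts and no-op callbacks:
hosts = [f"host-{n}" for n in range(80)]
ok = run_maintenance(hosts, 8,
                     drain=lambda b: None, patch=lambda b: None,
                     validate=lambda b: True, restore=lambda b: None)
```

Keeping the wave logic as a pure function like this also makes it testable without touching AWS, which is exactly the kind of structure the AI tends to skip.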
I started here 16 years ago taking customer service calls as a holiday temp. I have worked a solid dozen jobs in the company before wandering into SDE. As it is I barely qualify as an SDE; software is like 20% of my time. I haven’t applied for an actual job or done interview loops since I started, just kinda wandered from job to job.
If I may ask, what do you call an ‘app’? Are you specifying the language and technologies being used? Or do you just say ‘make me an app that does x, y, and z’?
And yes, I know Copilot sucks, but it is what I have to use for work.
I just say make an app that does x, y, z. If I plan on using it on a mobile device, I say optimize it for mobile, or identify what type of device the user has and adjust the app accordingly. If I need the app to save data, I say set up a backend in Supabase, and Lovable builds and connects the backend automatically. These are all web apps so far (HTML and Python code, I guess), though I have played around with pasting the code into Claude and asking it to translate everything into Swift so I can publish them on the Apple App Store, and it works well.
So far I’ve made a temperature blanket app for my daughter that connects to a weather site API, collects temperatures, and then generates a customizable temperature blanket preview. I made one that allows me to input pebble count data, generate graphs and tables, and export to CSV or PDF; the reports it creates look far superior to the current Rivermorph software I use, which cost me over $5,000. Another one uses the USGS database so I can pick a point on a topo map and it will define and calculate all the watershed and basin characteristics; I’ve been doing this by hand for nearly 15 years and the time savings is incredible. Another one extracts property owner data from qpublic property reports and generates marketing letters, and another is a clone of the ADP payroll app.
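For anyone curious, the core of a temperature blanket app is just mapping each day’s temperature to a yarn color, one row per day. A toy Python version of that idea might look like this; the temperature bands and colors are invented for illustration, and the real app pulls its daily highs from a weather API instead of a hard-coded list:

```python
# Toy version of the temperature-blanket logic: one crochet row per day,
# colored by that day's high temperature. Bands and colors are made up.

BANDS = [  # (low, high) in °F -> yarn color; low inclusive, high exclusive
    (-100, 32, "navy"),
    (32, 50, "blue"),
    (50, 65, "green"),
    (65, 80, "orange"),
    (80, 200, "red"),
]

def color_for(temp_f):
    """Pick the yarn color whose band contains this temperature."""
    for lo, hi, color in BANDS:
        if lo <= temp_f < hi:
            return color
    return "gray"  # fallback for out-of-range readings

def blanket_rows(daily_highs):
    """One colored row per day, in order, for the blanket preview."""
    return [color_for(t) for t in daily_highs]

rows = blanket_rows([28, 45, 60, 75, 90])  # five days of highs
```

Everything past this (the preview rendering, the weather API call) is UI plumbing on top of this one mapping.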
I’m sure to a real developer these are just basic apps and using AI can be annoying, but for someone who doesn’t know how to code this all feels revolutionary. I spent a year trying to learn python well enough to make stuff many years ago and I got nothing to show for it.
I’ve found that combining AIs works the best. When one hits a wall or keeps repeating something I don’t want, I switch to another and usually get a good result. I’ve found ChatGPT in canvas mode is amazing for creating and directly editing documents as well as creating technical drawings, Gemini seems best at breaking down internet searches and generating lifelike images, and Perplexity is best for anything financial. I use Grok when ChatGPT or Gemini hits a wall with editing images. I avoid Microsoft’s AI altogether.
That is a really great point. I have started to wonder whether my experience as a developer is actually a hindrance to using AI for development. It is also interesting that you are making relatively simple apps using basic off-the-shelf functions and splicing them together. In business that would normally be neither acceptable nor desirable due to the relatively complex nature of enterprise apps. But for a small business, where security and ‘look and feel’ are not as important, these differences become unimportant. Arguably, this may allow you to leverage AI more effectively than a typical business user like me, who is bound by corporate-mandated constraints.
The corporate constraints are a real killer for using it, for sure, despite the corporate demand haha.
We don’t use NPM; we have our own package manager layered on top of it to control package imports. We have custom build systems for everything, unique forks of all kinds of software, and a laundry list of security packages we have to import and implement in anything we make. We have to integrate new stuff with old stuff. When we finish an application, it goes through a 3-month security review that rips it apart. The AI isn’t trained on any of these unique things, and it doesn’t even have access to all the documentation.
Yesterday I went in circles a dozen times with the AI because it kept deciding that our build system wasn’t compatible with Node 22, since the code it added wouldn’t build, even though the package built fine with Node 22 before it added its broken code. I had this exact conversation with the AI yesterday.
I am in the same boat as you. I have to use specific technologies; even fonts are defined for us in corporate ‘style guides’ (hate, hate, hate…). But these same technologies are not understood by AI, and they often confuse the LLM when it encounters custom UI components or pipeline scripts full of technologies and libraries that it is unfamiliar with. This is where I think non-constrained people have an advantage. By starting with a knowledge base that the AI is familiar with, the custom tweaks still stand on a relatively firm foundation. Meanwhile, we business devs are stuck trying to prompt AI to modify our CI/CD pipeline scripts to update deployment flags in a particular testing environment to meet requirements.
When I want to change the font or appearance in an app I show it a screenshot of an example of what I want and it copies it exactly. Probably not helpful for you but figured I’d throw it out there.
I was thinking about this a bit more last night in light of the further comments made. I wonder if this experience is why Wall Street has such a different view of AI and coding than the professionals? A novice may see AI writing simple CRUD applications to save pictures and notes and think, “Gee, if I can do this with AI, then surely a professional can use AI to write enterprise-level applications with far fewer developers. I can lay most of them off and save my company millions.” But as noted, this is a bit of an illusion. There are plenty of examples of simple CRUD applications, written in any number of languages, available for training AI. A 500+ view-page SPA that manages enrollment for a national healthcare company, however, is not something that AI can apply its learning to engineer. That takes humans with experience and an understanding of architecture and technology to build and maintain. Yes, AI can be used for certain aspects of this development, but the development community’s experience with AI and the novice’s seem to be miles apart, in both the experience itself and the expectations.
That is where I think there is a disconnect. I see nothing to indicate that AI will ever be able to do reasoned software development. The novice thinks that this kind of AI development is just around the corner since ‘it has only been around for a few months’, and they are already amazed. What the novice does not understand is that AI is not using any logic or reasoning. This is fine for simple CRUD (Create, Read, Update, Delete) apps like saving a photo. But large-scale apps take architecture, reasoning, and understanding to build and maintain. As an experienced developer, I have seen zero evidence that AI can apply reason or logic to a problem unless prompted how to do it.