I get that this isn’t a complicated app. I’m not arguing that. The point is that it generated code just as well as any human could, for the purpose I needed, in a fraction of the time it would have taken a human. It would have taken more than 8 minutes just to communicate what I wanted to a developer. It would have cost me thousands of dollars, and I would likely have gotten a worse result, since they would know nothing about stream restoration or the ACOE SQT. How long do you honestly think it would take you to create this “trivial app” from scratch?
Yeah, I’m not arguing that a use case like yours doesn’t exist. But I can guarantee the platforms you used to make that app lost money on the tokens spent alone, and that’s my biggest issue with AI: you used it, but the ones buying and paying for the infrastructure can’t make that part profitable, and the whole thing is running off a cliff with its arms out in terms of power and water usage.
I mean, last week I was able to use Copilot (hate it, but it is what it is) to transcribe a Teams meeting and then convert the notes into project requirements. That would have been an hour of a PM’s time, with probably worse results.
So yes, AI has a use case, but it’s all entry-level stuff. Which also impacts the job market: companies aren’t hiring entry level at the moment, and it’s the hardest it’s been in generations for a college graduate to find work because of that.
The others are right. The forms you’re generating are junior or entry-level developer tasks. Still impressive, still valuable, but not what investors are promising for far more complex environments.
Lovable is currently pulling in 16 million every month! Around 500 million total has been invested in them. Almost all of that investment has been recouped at this point.
Just because the app I showed isn’t super complicated doesn’t mean it’s not possible to use AI to make something complicated. Like I said, it took 8 minutes to create. If AI can replace junior developers now, it’s only a matter of time before it comes after senior developers too.
I’d love to be a test case and try to develop something more complicated as a non-developer using AI. Give me some ideas.
Here is another one. It’s more complex: a ruler that calibrates, creates graphs, etc.
It’s not a publicly traded company, so we can’t know for sure. But even though they have grown like wildfire and have a 6 billion valuation, do the math on the credits they buy from OpenAI, Anthropic, etc. against their pricing (and free tier): there’s no way they are actually profitable. Even making 200 million a year, they are likely burning double that on AI credits.
Like I said, the math never maths out. No one on the platform end is actually making a profit; they are just raising seed round after seed round from investors, diluting shares… Lovable has raised over 6 billion, but based on just buying the credits to operate (they don’t own an AI model, they just operate on top of others’), they have probably burned most of that already…
Note OpenAI is the same thing: massive growth and revenue, yet also absurd debt, and spending that exceeds revenue, and even investor funds, by billions.
COVID or physics?
I think any efficiency gains just become performance gains and datacenters will continue to demand as much power as they can.
Both, I guess. There was more of a push to find ways to shrink the package, but COVID killed a lot of the R&D, and it’s been just bigger and more power-hungry ever since. I don’t know enough to conjecture on how much smaller the package can realistically go, but regardless, it’s not happening at this point.
This is only possible because of R&D though. We used to be physically limited by die size for a single GPU (about 800mm^2), now we have multi-die monsters thanks to high bandwidth and low latency interfaces.
That’s fair. I don’t have enough knowledge on the materials science and hardware side of things.
Your pessimism is exhausting. Optimists have more fun and make all the money. You should give it a try.
What @Captrogers is saying (please correct me if I’m wrong) is: unless companies can make AI profitable for investors, enjoy making web services with AI for free/cheap while you can.
I imagine they will first go the YouTube route and fill the platforms with ads until ads are all you see, like a poorly made cash-grab mobile game. Every credit worth 60 seconds of ad views or something absurd, lol.
They have already started trying stuff we can’t talk about on this platform
Whoa whoa whoa, let’s not give them ideas. I was only positing the end result and the consequences
I expect most AI advertising to be more insidious and invasive.
For non-programming things: “ChatGPT, I have these symptoms, what’s wrong with me?” Pfizer paid to influence results, so you should ask your doctor about Chatrizone.
For programming things: “Great question! I’ve analyzed all the options, and it looks like this expensive AWS service is your best choice to build that tool!”
etc.
I’m speaking about his general doom-and-gloom pessimism. AI bubble collapse, crypto collapse, power grid collapse, general everything collapse, etc. It’s been a theme for what seems like years now. Like I said, it’s exhausting. Reminds me of when I was a prepper about a decade ago. I sure don’t miss all that anxiety.
gestures broadly
That is not the point.
How long do you think you and AI would take to create a national health-plan enrollment application catering to national and regional customers, with plans spanning employer, Medicaid, Medicare, federal, state, and legacy capabilities, with over 270 unique views and unique business logic for each state and region, all built to a specific style guide that specifies your user interface design to the pixel? Oh, did I mention the RESTful back end with over 500 unique endpoints and a database backing each table, fronted by a BFF (back end for front end)? Paired with a CI/CD pipeline that runs our unit tests and deploys the resulting build to multiple environments, each with unique configurations to simulate levels of the available runtime environment to support manual end-to-end testing?
The leap from an ad-hoc bolting together of basic elements to something like an enterprise application is immense. It is not simply going from a few hundred lines of code to hundreds of thousands. It is engineering the underlying data structures and technologies to support the ongoing development process so that new features can be added all while supporting the old ones. Over time, this can become quite complex as well as a bit of a juggling act with respect to best practices and practicality.
Good question. Here is the plan ChatGPT came up with. Let’s see what happens.
Below is a single, “paste-and-run” prompt you can drop into Lovable that forces it to (1) lock a pixel-precise design system, (2) model state/region-specific enrollment logic at scale, (3) generate a BFF + REST API + database, and (4) wire CI/CD with multi-environment configs for manual E2E testing—without pretending it can implement all 270 views and 500 endpoints in one shot. It will produce an executable scaffold plus a repeatable expansion plan that scales.
Copy/paste prompt for Lovable
Build a production-grade National Health Plan Enrollment web application and supporting platform with the architecture and constraints below. Deliver the solution as an incremental, shippable scaffold that can scale to 270+ unique UI views and 500+ REST endpoints across federal/state/legacy programs. Do not hand-wave: if something is not fully implemented, generate the framework, patterns, and a backlog with clear acceptance criteria so the team can expand deterministically.
1) Product scope and user types
Create an enrollment application that supports national and regional customers and these plan categories:
- Employer-sponsored
- Medicaid
- Medicare
- Federal marketplace
- State marketplace
- Legacy plans (state/region-specific programs and grandfathered offerings)
User roles:
- Applicant (individual/family)
- Broker/Assister
- Employer admin (for employer plans)
- Caseworker (state programs)
- Customer support agent
- System admin
Core journeys (must exist end-to-end in the scaffold):
- Account creation + identity + consent
- Eligibility pre-check
- Application creation (household, income, residency, immigration/citizenship attestation, etc.)
- Plan shopping + comparison
- Enrollment submission
- Document upload + verification tasks
- Status tracking + communications
- Admin/case management dashboard (lightweight MVP)
2) Configuration-driven variability at national/state/region levels
The application must support per-state and per-region rules with unique business logic and UI differences. Implement a rules/configuration engine that can:
- Load a base (national) configuration
- Override by state
- Override by region within a state
- Override by plan program (Medicaid/Medicare/etc.)
- Override by feature flags and effective dates
Deliver:
- A clear configuration schema (JSON/YAML) for UI routes, field requirements, validations, copy, documents, eligibility rules, and integrations.
- A rules execution layer (e.g., policy evaluation module) with unit tests and examples for at least 3 representative states and 2 regions each.
- A pattern that scales to 50 states without code forks.
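The base-then-override layering described above can be sketched as a small merge function. Python is used only for illustration; the field names and the choice to replace (rather than concatenate) lists are assumptions, not part of any real schema:

```python
# Layered configuration resolution sketch: base -> state -> region -> program.
# All keys and rule names here are illustrative placeholders, not real state rules.

def deep_merge(base, override):
    """Return a new dict where override wins, recursing into nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def resolve_config(base, state=None, region=None, program=None):
    """Apply override layers in precedence order; later layers win."""
    config = base
    for layer in (state, region, program):
        if layer:
            config = deep_merge(config, layer)
    return config

base = {
    "fields": {"income": {"required": True}, "ssn": {"required": True}},
    "documents": ["id_proof"],
}
state = {  # hypothetical state-level override
    "fields": {"ssn": {"required": False}},
    "documents": ["id_proof", "residency_proof"],
}

resolved = resolve_config(base, state=state)
print(resolved["fields"]["ssn"]["required"])  # False: the state override wins
print(resolved["documents"])                  # the state's list replaces the base list
```

Whether lists replace or concatenate under override is a design decision the real configuration schema would need to pin down explicitly.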
3) UI requirements: pixel-precise style guide enforcement
Assume a strict style guide exists that specifies UI to the pixel. Implement:
- A design token system (spacing, typography, color, radii, shadows, z-index, breakpoints).
- A component library (buttons, inputs, selects, stepper, cards, alerts, modals, tables, accordions, pagination, toast, etc.) that consumes tokens.
- A layout grid and responsive rules.
- A linting/verification approach to reduce drift (e.g., Storybook snapshots or visual regression hooks; a scaffold is fine).
If the style guide content is not provided, create a placeholder style guide with tokens/components AND generate a checklist of what to replace once the real guide is supplied. Do not invent brand assets; use neutral defaults.
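A minimal sketch of the token-consumption idea, with neutral placeholder values rather than any real style guide (in a real frontend these would live in CSS variables or a TypeScript module; Python is used here only for illustration):

```python
# Design-token sketch: components read values from tokens only, never
# hard-coded pixels. All values below are neutral placeholders.

TOKENS = {
    "spacing": {"xs": 4, "sm": 8, "md": 16, "lg": 24},        # px
    "radius": {"card": 8, "button": 4},
    "breakpoint": {"mobile": 480, "tablet": 768, "desktop": 1200},
}

def token(path):
    """Look up a token like 'spacing.md'; fail loudly on unknown names
    so drift from the style guide is caught early."""
    node = TOKENS
    for part in path.split("."):
        node = node[part]  # a KeyError here means an undefined token
    return node

# A component consumes tokens instead of literal pixel values:
card_style = {"padding": token("spacing.md"), "border_radius": token("radius.card")}
print(card_style)  # {'padding': 16, 'border_radius': 8}
```

Failing loudly on unknown token names is what makes the "linting/verification to reduce drift" requirement enforceable: a component can't silently invent a pixel value that isn't in the guide.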
4) Application scale: 270+ views
Do NOT attempt to manually create 270 bespoke pages. Instead:
- Implement a route/view generation framework where “views” are composed from reusable templates and config.
- Provide:
  - At least 15 fully implemented representative views spanning the main journeys (auth, eligibility, application steps, plan shopping, enrollment submit, document upload, status, admin dashboard).
  - A view registry and code generation or composition strategy that can scale to 270+.
  - A sample “state override” that changes a view’s fields, validation, and content.
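The registry-plus-override composition in this section could look roughly like this; the view IDs, field names, and the "CA" override are invented for illustration:

```python
# Config-driven view registry sketch: a "view" is a template name plus field
# config, so 270+ views become data, not bespoke pages. All IDs are illustrative.

BASE_VIEWS = {
    "household-members": {
        "template": "form-step",
        "fields": ["name", "dob", "relationship"],
        "validations": {"dob": "past_date"},
    },
}

STATE_OVERRIDES = {
    "CA": {  # hypothetical state override adding one extra field
        "household-members": {"fields": ["name", "dob", "relationship", "tribal_status"]},
    },
}

def resolve_view(view_id, state=None):
    """Return the base view definition with any state override applied."""
    view = dict(BASE_VIEWS[view_id])
    override = STATE_OVERRIDES.get(state, {}).get(view_id, {})
    view.update(override)  # a shallow merge is enough for this sketch
    return view

print(resolve_view("household-members")["fields"])
print(resolve_view("household-members", state="CA")["fields"])
```

The 15 hand-built representative views would then just be templates this registry can point at; scaling to 270+ means adding registry entries, not pages.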
5) Backend architecture: BFF + REST API (500+ endpoints pattern)
Create a platform with:
- BFF (Backend for Frontend): tailored endpoints for the UI, aggregating downstream calls, shaping payloads, handling caching, retries, and error normalization.
- RESTful core services/API with resource-oriented design and versioning.
- Authentication/authorization (RBAC) and audit logging.
- OpenAPI specs generated for both BFF and core API.
Do NOT generate 500 endpoints explicitly. Instead:
- Implement core resource domains and patterns that plausibly expand to 500 endpoints:
  - Users, Roles, Sessions
  - Applications, Households, Members
  - Eligibility determinations
  - Plans, Quotes, Providers
  - Enrollments
  - Documents, Verification tasks
  - Notifications
  - Employers (for employer plans)
  - Admin case management
  - Reference data (states/regions/programs)
- For each domain: implement a minimal set of CRUD + workflow endpoints plus pagination/filtering, and show how additional endpoints would be added consistently.
- Include:
  - An example of a state-specific workflow endpoint (e.g., “/states/{state}/eligibility/run”).
  - Idempotency keys for submission endpoints.
  - A standard error model, correlation IDs, request logging.
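The idempotency-key requirement for submission endpoints can be sketched as follows; the in-memory store and field names are illustrative, and a real service would persist keys in the database alongside the enrollment:

```python
# Idempotency sketch: the first request with a given key executes; retries
# with the same key return the stored result instead of double-submitting.

import uuid

_idempotency_store = {}  # key -> stored response; a DB table in production

def submit_enrollment(payload, idempotency_key):
    """Create an enrollment exactly once per idempotency key."""
    if idempotency_key in _idempotency_store:
        return _idempotency_store[idempotency_key]  # replay: no second enrollment
    response = {"enrollment_id": str(uuid.uuid4()), "status": "submitted"}
    _idempotency_store[idempotency_key] = response
    return response

first = submit_enrollment({"plan": "silver-123"}, idempotency_key="key-1")
retry = submit_enrollment({"plan": "silver-123"}, idempotency_key="key-1")
print(first == retry)  # True: the retry did not create a second enrollment
```

This is the property that makes network retries safe on a submission endpoint: the client can resend the same request after a timeout without risking duplicate enrollments.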
6) Database and data model
Implement a relational database model with:
- Clear entities for the above domains
- Migrations
- Seed data for the representative states/regions and sample plans
- A data access layer with transaction boundaries
- Audit tables or an event log for critical actions
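A toy version of the audit/event-log idea, with invented field names; in production this would be an append-only database table written inside the same transaction as the action it records:

```python
# Append-only audit log sketch for critical actions: each entry records who
# did what, to which entity, when, and under which correlation ID.

import datetime

AUDIT_LOG = []  # stands in for an append-only audit table

def audit(actor, action, entity, correlation_id):
    """Append one immutable audit entry; entries are never updated or deleted."""
    AUDIT_LOG.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "entity": entity,
        "correlation_id": correlation_id,
    })

audit("caseworker:42", "eligibility.approve", "application:1001", "req-abc")
print(len(AUDIT_LOG), AUDIT_LOG[0]["action"])
```

The correlation ID is what ties an audit entry back to the request logs mentioned in section 5, so a single user action can be traced across the BFF, core API, and database.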
7) CI/CD and environments for manual E2E testing
Create a CI/CD pipeline that:
- Runs unit tests (frontend + backend)
- Runs linting and type checks
- Builds artifacts
- Deploys to multiple environments (e.g., dev, qa, staging, perf), each with unique configuration:
  - Feature flags
  - Mock vs real integrations
  - Rate limits / throttling simulation
  - Reduced resources to simulate a constrained runtime
  - Different state/region rulesets enabled
- Produces environment-specific build outputs and config bundles.
If you cannot actually deploy in this environment, generate:
- Pipeline definitions (GitHub Actions or GitLab CI)
- Dockerfiles / docker-compose for local + multi-env simulation
- A Helm/Kustomize skeleton (if using Kubernetes)
- A documented “runbook” for manual E2E testers.
8) Non-functional requirements
- Security: OWASP basics, input validation, CSRF, secure cookies, secrets management, least privilege
- Observability: structured logs, metrics, tracing hooks, health checks
- Performance: caching strategy in the BFF, pagination, async processing for heavy tasks
- Accessibility: WCAG-minded components and form errors
- Internationalization-ready (even if only en-US strings are included)
9) Deliverables required in your output
Produce:
- A repo structure (frontend, bff, api, db, shared)
- Tech stack choices (justify briefly)
- Running instructions (local dev) with docker-compose
- OpenAPI specs
- A config/rules schema + examples for multiple states/regions
- 15 representative implemented views + a generation strategy for the rest
- CI/CD pipeline files + environment configuration layout
- A prioritized backlog to expand to 270 views / 500 endpoints with acceptance criteria and a naming convention
10) Constraints and realism
- Be explicit about what is scaffolded vs fully implemented.
- Prefer configuration-driven and reusable patterns over one-off code.
- Keep naming conventions consistent.
- Use defensive coding and tests around the rules engine and critical submission workflows.
Start now by generating the full scaffold and the first working vertical slice: user signup → eligibility pre-check → application creation → plan shopping (mock data acceptable) → enrollment submission → status page, with state/region override demonstrated in at least one step.
Right, but now do this without having been told what is needed by an engineer. Companies aren’t going to be paying for engineers! They’re way too expensive. All they’re going to want is someone who can type reasonably well and is willing to accept beer (may or may not be cold) and pizza (one slice cut 24 ways, no seconds).
Here is what one single prompt generated. Total time invested: 45 seconds. I think if I had all the plans for every state available, I could build what you’re asking for. That is the point! I know nothing about coding, and I could probably get it done.