
You Built It with AI… Now What? The Risk You Didn’t See Coming
By Brad Gardner, Founder & CTO
The rise of AI-assisted coding tools has opened the door for many creative minds, including those without formal technical backgrounds, to build impressive software products. And while that kind of accessibility is exciting, it comes with serious risks. Behind the hype, many are learning the hard way that just because software works doesn't mean it's safe.
A Cautionary Tale: A Developer Under Attack
A developer, @leojr94_, shared their story on X, describing how their SaaS product came under attack after they publicly detailed how they had built it using Cursor.

This highlights a critical point. Building and deploying software is not just about getting it to work. It’s about securing it, ensuring it can handle malicious behavior, and protecting your data and your users. Without an understanding of these risks, and without building the right defenses into your code and infrastructure, even a small app can become a liability.
Whose code is it anyway?
A recent post from Kaspersky details an even more alarming incident. A popular open-source package for Cursor AI was found to include a malicious payload that turned unsuspecting users’ machines into crypto-miners.
This incident underscores another reality of modern software development. Trusting third-party code blindly is dangerous. Many developers, especially those learning via AI tools, do not yet know how to audit dependencies or recognize suspicious behavior in libraries. This lack of knowledge can open the door for attackers to piggyback on your work.
When Poor Security Meets AI at Scale: McDonald’s AI Hiring Breach
A particularly striking example of these risks at scale came from McDonald’s AI-powered hiring tool.
A vendor working for McDonald’s deployed an AI-driven system to process job applications. But due to embarrassingly poor security, including a database protected by the password 123456, the personal information of 64 million applicants was exposed.
This example combines many of the themes discussed:
- Using AI to automate a complex process.
- Developers or vendors failing to apply even the most basic security controls.
- A lack of oversight and understanding of what the tools were doing behind the scenes.
It also illustrates that even at a giant, well-funded company, these mistakes can happen if developers and decision-makers do not truly understand the technology they are deploying or the risks it creates.
Beyond the Code: Why Knowledge Still Matters
Here’s the uncomfortable truth:
Even if an AI can write functional code, you still need to understand how that code works to ensure it is reliable, performant, and secure.
Non-technical builders and even some junior developers often lack the foundational knowledge to recognize when code, even if it runs, is dangerous. AI models are trained on both good and bad code. They can just as easily produce something with critical vulnerabilities as they can something robust.
Here are some key areas of risk:
Insecure Authentication & Authorization
Many beginners misunderstand or mis-implement authentication and authorization in the apps they build.
- How does OAuth 2.0 really work?
- What type of MFA should you support?
- How should you store JWTs?
Without understanding these basics, you risk exposing user data or allowing attackers to hijack sessions.
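To make the JWT question concrete, here is a minimal sketch of one common pattern: sign a short-lived token on the server, store it in an httpOnly, Secure cookie rather than in localStorage, and verify it on every protected route. It assumes Express with the cookie-parser and jsonwebtoken packages; names like JWT_SECRET and requireAuth are illustrative, not taken from any of the incidents above.
```typescript
// A minimal sketch, not production code: sign a short-lived JWT at login,
// store it in an httpOnly cookie, and verify it on protected routes.
// Assumes Express plus the cookie-parser and jsonwebtoken packages.
import express from "express";
import cookieParser from "cookie-parser";
import jwt from "jsonwebtoken";

const app = express();
app.use(express.json());
app.use(cookieParser());

// Illustrative secret handling; a real app should fail fast if the secret is missing.
const JWT_SECRET = process.env.JWT_SECRET ?? "change-me";

app.post("/login", (req, res) => {
  // ...check credentials against your user store before this point...
  const token = jwt.sign({ sub: req.body.username, role: "user" }, JWT_SECRET, {
    expiresIn: "1h",
  });
  // httpOnly + secure keeps the token away from client-side JavaScript,
  // which limits the damage an XSS bug can do.
  res.cookie("session", token, { httpOnly: true, secure: true, sameSite: "strict" });
  res.sendStatus(204);
});

// Hypothetical middleware that rejects requests without a valid token.
function requireAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  try {
    (req as any).user = jwt.verify(req.cookies.session, JWT_SECRET);
    next();
  } catch {
    res.sendStatus(401);
  }
}

app.get("/account", requireAuth, (req, res) => {
  res.json({ user: (req as any).user });
});
```
The details will differ for your stack, but the underlying question is the same: where does the token live, who can read it, and what happens when it expires?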
Injection Flaws
SQL injection and command injection are still among the most common and dangerous vulnerabilities.
- Do you know how to properly use parameterized queries?
- Are you validating and sanitizing user inputs?
- Are you escaping output where needed?
AI code suggestions may include unsafe patterns if you don’t know how to spot them.
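For illustration, the sketch below uses the node-postgres (pg) driver and a hypothetical users table to show the difference: the unsafe version splices user input into the SQL string, while the parameterized version passes it as a separate value the database never treats as SQL.
```typescript
// A sketch of the difference between string concatenation and a parameterized
// query, using the node-postgres (pg) driver. The users table is hypothetical.
import { Pool } from "pg";

const pool = new Pool(); // reads connection settings from PG* environment variables

// UNSAFE: user input becomes part of the SQL text, so a value like
// "x' OR '1'='1" changes what the query means.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT id, email FROM users WHERE email = '${email}'`);
}

// SAFER: the value travels separately from the SQL text and is never
// interpreted as SQL, no matter what it contains.
async function findUser(email: string) {
  return pool.query("SELECT id, email FROM users WHERE email = $1", [email]);
}
```
If you can’t tell which of these two patterns an AI assistant just handed you, that is the gap to close before shipping.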
XSS & CSRF
Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF or XSRF) attacks remain a real threat:
- Do you know when to use Content-Security-Policy headers?
- Do you know how to properly escape dynamic content in templates?
- Do you know why forms need anti-forgery tokens?
AI won’t magically protect you from these unless you explicitly tell it to, and even then, you may not know if it did it correctly.
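As a rough sketch of two of these defenses, the example below sets a restrictive Content-Security-Policy header and escapes user-supplied data before interpolating it into HTML. The Express setup and the hand-rolled escapeHtml helper are illustrative choices; most template engines escape by default, and anti-forgery tokens are usually best handled by your framework’s built-in CSRF protection rather than written from scratch.
```typescript
// A sketch of two XSS defenses: a restrictive Content-Security-Policy header
// and escaping user input before it is interpolated into HTML.
// The escapeHtml helper is illustrative; real template engines escape by default.
import express from "express";

const app = express();

// Only allow scripts from our own origin; inline <script> tags are blocked.
app.use((_req, res, next) => {
  res.setHeader("Content-Security-Policy", "default-src 'self'; script-src 'self'");
  next();
});

// Escape the characters that are significant in HTML.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

app.get("/hello", (req, res) => {
  const name = String(req.query.name ?? "world");
  // Because the value is escaped, a query string containing <script> renders
  // as text instead of executing in the visitor's browser.
  res.send(`<h1>Hello, ${escapeHtml(name)}</h1>`);
});
```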
Data Validation and Sanitization
Many attacks exploit weak or missing input validation.
- Are you validating data on both client and server?
- Are you checking against a whitelist instead of a blacklist?
Blindly trusting input, or blindly trusting AI-generated validation code, is a recipe for disaster.
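One way to avoid that trap is schema-based validation on the server. The sketch below uses the zod library as an example (any schema validator works); the point is that only explicitly allowed fields, types, and value ranges get through, and everything else is rejected.
```typescript
// A sketch of server-side, allowlist-style validation using the zod library.
// The schema fields here (email, role, age) are made up for illustration.
import { z } from "zod";

const SignupSchema = z.object({
  email: z.string().email().max(254),
  role: z.enum(["viewer", "editor"]),      // only these two roles are allowed
  age: z.number().int().min(13).max(120),  // bounded, typed, no free-form strings
});

export function parseSignup(body: unknown) {
  const result = SignupSchema.safeParse(body);
  if (!result.success) {
    // Anything that does not match the schema exactly is rejected.
    throw new Error("Invalid signup payload");
  }
  return result.data; // a fully typed, validated object
}
```
Client-side validation is still worth having for user experience, but the server-side check is the one attackers cannot bypass.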
AI Is Not a Silver Bullet
Because AI is trained on a mix of good and bad examples, you cannot assume that what it produces is correct, optimal, or secure.
Even if it “works” and passes your tests, that doesn’t mean:
- It can handle malicious input.
- It follows best practices.
- It performs well under load.
- It scales gracefully.
Without the ability to read and understand what the AI gave you, you are shipping code you do not actually control. Attackers know how to exploit that.
Who’s Responsible? The Accountability & Liability Risk
One of the most overlooked dangers of using AI to build software you don’t fully understand is the accountability risk.
When you deploy something into production, whether it is a SaaS app, an internal tool, a machine learning model, or even just a public-facing website, you become responsible for the consequences of its behavior.
It does not matter if the code was written by you, by an AI, or copied from Stack Overflow.
If it runs on your infrastructure, under your name, and affects your users, you are accountable for what it does.
Legal and Regulatory Consequences
- If you leak customer data because you failed to secure your database or validate inputs properly, you can be held liable under privacy laws like GDPR, CCPA, or HIPAA.
- If your app facilitates fraud or allows unauthorized access, you or your company could face lawsuits and regulatory fines.
- If you build on third-party libraries without respecting their licenses, you may incur legal or financial penalties.
Reputational Damage
Even if you’re not sued, your reputation may be damaged. Users (and investors) have little sympathy for breaches caused by “we didn’t know” or “the AI generated it.”
You Can’t Audit What You Don’t Understand
If you don’t know how to read the code you ship, you can’t properly:
- Audit it for vulnerabilities.
- Test it against abuse scenarios.
- Explain how it works to regulators, partners, or users.
When something goes wrong in production, and it will, you’ll have no way to diagnose the problem or defend your decisions.
How to Protect Yourself (Even If You’re Not a Developer)
You don’t need to be a security expert to take meaningful precautions. Here are a few ways to reduce your risk, even if you’re just starting out or relying heavily on AI tools:
- Use hosted solutions when possible. Platforms like Firebase, Supabase, and Auth0 have built-in security features that reduce risk for early-stage builders.
- Rotate your API keys regularly. Treat them like passwords — if they get exposed, revoke and replace them.
- Don’t share too much about your stack. Posting screenshots or code samples online can invite bad actors.
- Ask for a code or architecture review. A second set of experienced eyes can catch things AI or junior devs might miss.
- Push back on speed-at-all-costs culture. Security shortcuts now often turn into cleanup disasters later.
These steps won’t make your app bulletproof, but they can buy you time, reduce your exposure, and help you build more confidently.
The Bottom Line
If you’re a non-technical founder, solo builder, or a developer new to the field:
- Don’t blindly trust AI output — treat it as a suggestion, not gospel.
- Learn the basics of security best practices for web development.
- Use trusted libraries and keep them up to date.
- Audit your dependencies and avoid unnecessary ones.
- Bring in experienced engineers to review your architecture and code before launch.
If you don’t know how something works, you won’t know if it’s broken, and neither will your AI.
Building software is about more than making something run; it’s about making it run safely.
As developers, we have a responsibility to encourage people to build amazing software, but also to help them understand what they are building.