From Developer to Security Orchestrator: The Role Every Developer Must Grow Into in 2026

By Malik Saqib · 13 min read

There's a common assumption in software development that security is someone else's job. The security team handles that. The DevSecOps engineers handle that. The compliance team handles that. You just need to ship features.

That assumption is being dismantled — not by ideology, but by numbers.

Supply chain attacks surged 742% in three years. 82% of containerized applications suffered a breach in 2025. 95% of DevSecOps leaders now expect AI-driven security automation to be critical to their delivery pipeline. And 41% of all production code is AI-generated — code that, studies show, carries a 24.7% chance of containing a security flaw.

The math no longer permits the separation of "developer" and "security-aware developer." In 2026, they are the same role. The question is not whether you'll be expected to think about security. The question is whether you'll be equipped to.

This guide is about getting equipped.

The Shift That Changed Everything

For years, security was genuinely a separate discipline. The code was written, then security scanned it, then vulnerabilities were filed as tickets, then developers eventually fixed them — weeks or months later, after the context had evaporated. This was frustrating, slow, and produced mediocre security outcomes. But it was at least a coherent model.

Two forces broke it irreparably.

First: the speed of AI-assisted development. When developers are shipping features in hours instead of days, a security review cycle that operates on a weekly cadence is not late — it's absent. The code has been deployed, used, and extended by the time security sees it.

Second: the expansion of the attack surface. Cloud-native architectures, containerized microservices, AI integrations, third-party APIs, open-source dependencies — every layer of the modern stack is a potential entry point. The old model of "secure the perimeter" doesn't map onto a system with no perimeter.

By 2026, DevSecOps is no longer perceived as an engineering best practice — it's considered a business-critical operating model. Companies that continue to regard security as a downstream control are defenseless, sluggish, and increasingly non-compliant.

The response the industry has converged on is not "hire more security engineers." There simply aren't enough. The response is to distribute security responsibility to the people closest to the code — the developers — and give them the tools, workflows, and knowledge to act on it.

That is what a Security Orchestrator does. And it is increasingly what a developer does.

💡 What Is a Security Orchestrator?

A Security Orchestrator isn't a dedicated security engineer. It's a developer who integrates automated security tooling into their workflow, understands where trust boundaries live in their architecture, reviews AI-generated code for vulnerabilities, and treats security as a first-class design constraint — not an afterthought. The "orchestration" is coordinating automated tools, human review, and architectural decisions into a coherent security posture.

Why "Shift Left" Became "Shift Smart"

The "shift left" movement — embedding security earlier in the development process — dominated DevSecOps conversation for several years. The principle was correct: it costs 30× less to fix a vulnerability in development than in production. But the execution produced a different problem.

Shifting left was a good start. Shifting smart is the necessary next step. The goal is to stop flooding developers with low-impact alerts. Security feedback must be intelligent, contextual, and actionable directly in the developer's workspace.

The failure mode of naive shift-left was alert fatigue. Static analysis tools flagged hundreds of issues. Most were low severity. Most were false positives. Developers learned to ignore them. The few critical findings drowned in the noise.

Shift smart fixes this with three changes:

Intelligent triage. AI-powered tools now rank vulnerabilities by exploitability and actual business impact — not just CVSS score. A vulnerability in your authentication flow gets different attention than a vulnerability in a rate-limited public API endpoint.

Contextual feedback. Instead of a scan report delivered separately, security issues appear inline in the IDE at the moment of writing. The developer sees the problem while the context is fresh — not three weeks later in a Jira ticket.

Automated remediation suggestions. Modern tools don't just report the problem. They suggest the fix — often generating the corrected code directly in the editor.

This is the world you're operating in as a 2026 developer. The tools are better. The expectation is higher. And the knowledge to use both effectively is now your responsibility.

The Security Orchestrator's Stack

You don't need to become a full-time security engineer. You need to understand and operate a security toolchain that runs mostly automatically — and know how to interpret and act on its output.

Here is the practical stack, layer by layer.

Layer 1: IDE Security (Zero Friction)

Security that requires context-switching gets ignored. Security that lives in your editor gets used.

GitHub Advanced Security / Copilot Autofix — detects common vulnerability patterns as you type and suggests fixes inline. For Next.js and Node.js developers, it catches SQL/NoSQL injection, insecure deserialization, and hardcoded credentials with reasonable accuracy.

Snyk IDE plugin — real-time scanning of open-source dependencies as you add them. When you npm install, Snyk shows you immediately whether the package has known vulnerabilities, what the fix version is, and whether the vulnerability is actually exploitable in your usage pattern.

// Snyk catches this pattern immediately
const userInput = req.params.id;
const user = await User.findOne({ _id: userInput }); // ⚠️ NoSQL injection risk
 
// Suggests this fix
const { id } = req.params;
if (!mongoose.Types.ObjectId.isValid(id)) {
  return res.status(400).json({ error: 'Invalid ID format' });
}
const user = await User.findOne({ _id: new mongoose.Types.ObjectId(id) });

Layer 2: Pre-commit Hooks (Catch Before Push)

The cheapest place to fix a security issue is before it enters the repository at all.

# Install Husky for git hooks
npm install --save-dev husky
npx husky init
 
# .husky/pre-commit
#!/usr/bin/env sh
# (husky v9 hooks are plain shell scripts; no husky.sh sourcing line needed)
 
# Block commits with secrets
npx gitleaks detect --source . --exit-code 1
 
# Run security-focused linting
npm run lint:security
 
# Quick dependency audit
npm audit --audit-level=high

// .eslintrc.js — security-focused rules
module.exports = {
  plugins: ['security', 'no-unsanitized'],
  extends: ['plugin:security/recommended'],
  rules: {
    // Prevent eval() and variants
    'no-eval':                    'error',
    'security/detect-eval-with-expression': 'error',
    // Prevent unsafe regex (ReDoS attacks)
    'security/detect-unsafe-regex': 'error',
    // Flag string comparisons vulnerable to timing attacks
    'security/detect-possible-timing-attacks': 'warn',
    // Flag dynamic object property access (object injection / prototype pollution vector)
    'security/detect-object-injection': 'warn',
    // Prevent unsanitized innerHTML
    'no-unsanitized/method':      'error',
    'no-unsanitized/property':    'error',
  },
};
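
The pre-commit hook above runs `npm run lint:security`, which isn't defined anywhere yet. One minimal wiring in package.json, assuming ESLint and the plugins above are installed (the script name and glob paths are illustrative):

```json
{
  "scripts": {
    "lint:security": "eslint --config .eslintrc.js \"src/**/*.js\" \"app/**/*.js\""
  }
}
```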

Layer 3: CI/CD Pipeline Security (Automated Gate)

By 2026, automated security testing is built directly into CI/CD pipelines: vulnerability scans, code analysis, and compliance checks run on every change, giving developers real-time feedback on potential security issues.

This is the security gate that catches what slips past the IDE and pre-commit hooks. It runs on every pull request, automatically.

# .github/workflows/security.yml
name: Security Pipeline
 
on: [push, pull_request]
 
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for secret scanning
 
      # ── SAST: Static Application Security Testing ──
      - name: Run Semgrep SAST
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/nodejs
            p/react
            p/jwt
            p/owasp-top-ten
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
 
      # ── SCA: Software Composition Analysis ──
      - name: Snyk dependency scan
        uses: snyk/actions/node@master
        with:
          args: --severity-threshold=high --fail-on=upgradable
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
 
      # ── Secret Detection ──
      - name: Gitleaks secret scan
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
 
      # ── Container Security (if applicable) ──
      - name: Trivy container scan
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: fs
          scan-ref: '.'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
 
      # ── DAST: Dynamic security test against staging ──
      - name: OWASP ZAP baseline scan
        uses: zaproxy/action-baseline@v0.10.0
        with:
          target: ${{ secrets.STAGING_URL }}
          fail_action: false  # report baseline findings without failing the build

Layer 4: Runtime Security (Production Eyes)

Catching vulnerabilities before deployment is the goal. Catching exploits in production is the safety net.

For Next.js deployments, the critical runtime controls are:

// next.config.js — security headers as code
const securityHeaders = [
  {
    key: 'X-DNS-Prefetch-Control',
    value: 'on',
  },
  {
    key: 'Strict-Transport-Security',
    value: 'max-age=63072000; includeSubDomains; preload',
  },
  {
    key: 'X-Frame-Options',
    value: 'SAMEORIGIN',
  },
  {
    key: 'X-Content-Type-Options',
    value: 'nosniff',
  },
  {
    key: 'Referrer-Policy',
    value: 'origin-when-cross-origin',
  },
  {
    key: 'Permissions-Policy',
    value: 'camera=(), microphone=(), geolocation=()',
  },
  {
    key: 'Content-Security-Policy',
    value: [
      "default-src 'self'",
      "script-src 'self' 'unsafe-inline'",  // tighten per environment
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: blob:",
      "font-src 'self'",
      "connect-src 'self' https://api.yourservice.com",
      "frame-ancestors 'none'",
    ].join('; '),
  },
];
 
module.exports = {
  async headers() {
    return [
      {
        source: '/(.*)',
        headers: securityHeaders,
      },
    ];
  },
};

The OWASP Top 10 for MERN/Next.js Developers

The OWASP Top 10 is the canonical list of application security risks. As a MERN or Next.js developer, each entry has a direct translation into your daily code. Here's the practical version — what it means in your stack and how to prevent it.

1. Broken Access Control

The most common vulnerability in production applications in 2026. Almost always caused by the same pattern: trusting client-provided data to make authorization decisions.

// ❌ Broken: authorization based on client-supplied role
// app/api/admin/users/route.js
export async function GET(request) {
  const role = new URL(request.url).searchParams.get('role'); // attacker sends ?role=ADMIN
  if (role === 'ADMIN') {
    return NextResponse.json(await getAllUsers());
  }
  return NextResponse.json({ error: 'Forbidden' }, { status: 403 });
}
 
// ✅ Secure: authorization based on server-verified session
export async function GET(request) {
  const session = await getServerSession(authOptions);
 
  if (!session || session.user.role !== 'ADMIN') {
    return NextResponse.json({ error: 'Forbidden' }, { status: 403 });
  }
 
  return NextResponse.json(await getAllUsers());
}

2. Cryptographic Failures

Sensitive data exposed because it wasn't encrypted in transit or at rest — or because it was encrypted with a weak algorithm.

// ❌ Storing passwords in plain text or with MD5/SHA1
const user = await User.create({ email, password: req.body.password });
 
// ✅ Always hash passwords with bcrypt (cost factor 12+)
import bcrypt from 'bcryptjs';
 
const saltRounds = 12; // never below 10
const hashedPassword = await bcrypt.hash(password, saltRounds);
const user = await User.create({ email, password: hashedPassword });
 
// ✅ Never return password field in API responses
const user = await User.findOne({ email })
  .select('-password -__v'); // explicitly exclude sensitive fields

3. Injection (NoSQL / SQL / Command)

AI-generated code is particularly prone to this. The model generates working code — but doesn't always sanitize user inputs before using them in queries.

// ❌ NoSQL injection — user controls query operators
const { username } = req.body;
// If username = { $gt: "" }, this returns ALL users
const user = await User.findOne({ username });
 
// ✅ Validate input type and sanitize before querying
import { z } from 'zod';
 
const schema = z.object({
  username: z.string().min(1).max(50).regex(/^[a-zA-Z0-9_]+$/),
});
 
const parsed = schema.safeParse(req.body);
if (!parsed.success) {
  return res.status(400).json({ error: 'Invalid input' });
}
 
// Now safe — username is guaranteed to be a plain string
const user = await User.findOne({ username: parsed.data.username });

4. Insecure Design

This is the architectural vulnerability that no scanner will catch — because it's not in the code, it's in the decisions made before the code was written.

Examples in MERN/Next.js context:

- Business logic on the client (price calculations, discount eligibility, access-tier checks) that should be server-side
- API endpoints that expose more data than the frontend needs (returning full user documents when only name and avatar are required)
- Missing rate limiting on authentication endpoints, which makes brute-force attacks trivially easy
- No audit logging, so when a breach occurs you have no way to determine what was accessed or when

// ✅ Rate limiting on auth routes — express-rate-limit
import rateLimit from 'express-rate-limit';
 
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10,                   // 10 attempts per window
  message: { error: 'Too many attempts. Try again in 15 minutes.' },
  standardHeaders: true,
  legacyHeaders: false,
  // Share state across instances via Redis (RedisStore comes from the
  // separate 'rate-limit-redis' package; redisClient is created elsewhere)
  store: new RedisStore({ client: redisClient }),
});
 
app.use('/api/auth/', authLimiter);

5. Security Misconfiguration

The most common cause of production breaches that never should have happened. Default configurations, debug modes left on, environment variables exposed, CORS set to *.

// ❌ CORS wildcard in production
app.use(cors({ origin: '*' }));
 
// ✅ Explicit allowed origins by environment
const allowedOrigins = process.env.NODE_ENV === 'production'
  ? ['https://yourdomain.com', 'https://www.yourdomain.com']
  : ['http://localhost:3000'];
 
app.use(cors({
  origin: (origin, callback) => {
    if (!origin || allowedOrigins.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('CORS policy violation'));
    }
  },
  credentials: true,
  methods: ['GET', 'POST', 'PUT', 'DELETE'],
}));

# .env.example — document what's needed, never commit real values
DATABASE_URL=postgresql://user:password@host:5432/dbname
NEXTAUTH_SECRET=generate-with-openssl-rand-base64-32
NEXTAUTH_URL=https://yourdomain.com
JWT_SECRET=minimum-32-chars-random-string
ALLOWED_ORIGINS=https://yourdomain.com
 
# Production checklist
# [ ] NODE_ENV=production (disables stack traces in responses)
# [ ] Debug logging disabled
# [ ] All secrets rotated from development values
# [ ] Database not publicly accessible (VPC/private subnet)

⚠️ The AI Code Security Reality:

Studies in 2025 found that roughly 24.7% of AI-generated code contains a security flaw. Even the best models in 2026 — Claude 4, GPT-5, Gemini 2.5 Pro — still generate insecure defaults, hallucinate security checks, and occasionally put critical validation logic in the wrong layer. The two-stage review pattern (build, then audit) remains essential for any security-sensitive code path.

Securing AI-Generated Code: The Two-Stage Review Pattern

Because AI tools now write a significant portion of production code, and because that code has measurably higher security defect rates than carefully reviewed human code, every serious developer workflow in 2026 needs a systematic way to audit AI output.

The most effective approach is the two-stage review in the same session:

Stage 1 — Build
Prompt: "Build [feature] following our conventions in CLAUDE.md."
Review: Does it work? Does it fit the architecture?

Stage 2 — Security Audit (same session, role switch)
Prompt: "Now act as a security engineer performing a code review.
Examine the code you just wrote and specifically look for:

1. Authorization checks in the wrong layer (client vs server)
2. User inputs used in queries without validation/sanitization
3. Sensitive data exposed in API responses
4. Missing rate limiting on sensitive endpoints
5. Hardcoded secrets or credentials
6. CORS or CSP misconfigurations
7. Missing error handling that could leak stack traces

For each issue found: explain the attack vector, show the
vulnerable code, and provide the fixed version."

Review: Is the hardened version functionally equivalent?
        Does it handle edge cases the original missed?

This adds roughly 3–5 minutes to any AI-assisted feature build. It catches the majority of common vulnerabilities before they reach code review — and it produces significantly better output than relying on post-deployment scanning.

🛡️ The Security Orchestrator Workflow

1. Design with trust boundaries first: Before writing code, map where user-controlled data enters your system, where it crosses trust boundaries, and what happens if validation fails at each point.
2. Use IDE security plugins as your first line: Snyk, GitHub Advanced Security, or Semgrep in the editor. Zero context-switching, maximum uptake.
3. Pre-commit hooks as a safety net: Secret detection (Gitleaks) and high-severity dependency scanning before anything hits the repository.
4. CI/CD pipeline as the automated gate: SAST + SCA + container scanning on every pull request. Fail fast on critical findings.
5. Two-stage AI code review: For every significant AI-generated code block, build first, then security-audit in the same session.
6. Security headers as code: CSP, HSTS, X-Frame-Options in next.config.js. Versioned, reviewed, deployed automatically.
7. Dependency hygiene as a weekly habit: Dependabot auto-PRs plus a weekly npm audit review. Outdated dependencies are the most common source of known-exploited vulnerabilities.

The Career Case: Why This Matters for Your Salary

Beyond the moral and professional obligation to ship secure software, the economic case for security skills is now unambiguous.

DevSecOps-for-AI-pipelines roles pay $150,000–$210,000 in 2026 for integrating security into the MLOps lifecycle. As shift-left extends to AI, these engineers are critical for automated model scanning and secure CI/CD for ML workflows.

The premium is real and it compounds. A developer who ships features is valuable. A developer who ships secure features, builds pipelines that catch vulnerabilities automatically, and can reason about the security implications of architectural decisions is significantly more valuable — and increasingly difficult to hire.

The workforce trend that will define competitive advantage in AppSec in 2026 is the ability to leverage AI and machine-learning security capabilities effectively. Professionals who can build, deploy, and secure AI systems will be in high demand.

The four specific skills that command the largest premium for MERN and Next.js developers:

Security automation — building and maintaining the CI/CD security pipeline. Understanding SAST, SCA, DAST, and what they catch and miss.

AI code auditing — systematic review of AI-generated code for security flaws. The ability to use AI to audit AI is particularly valued.

Architecture-level security thinking — zero trust principles, trust boundary design, threat modeling before code is written.

Dependency and supply chain hygiene — SBOM (Software Bill of Materials) generation, dependency provenance verification, and vulnerability response processes.

Building Your Security Orchestrator CLAUDE.md

If you're using Claude Code or Cursor for your MERN/Next.js projects, your project context file should encode your security conventions explicitly. AI agents that know your security rules produce dramatically more secure code by default.

# Security Conventions (CLAUDE.md section)
 
## Authentication & Authorization
- Auth checks: server-side only — NEVER trust client-provided role or id
- Use getServerSession(authOptions) in all server-side protected routes
- Session tokens: never log, never include in API responses
- JWT: verify signature on every request, check expiry, check role
 
## Input Validation
- ALL user inputs validated with Zod before any DB operation
- Never use req.body/req.params directly in DB queries
- ObjectId validation: mongoose.Types.ObjectId.isValid() before findById
- File uploads: validate type, size, and scan for malware
 
## Database
- Always use .select() to explicitly project fields
- Never return password, __v, or internal fields in responses
- Use .lean() for read-only queries
- Parameterized queries only — no string concatenation in queries
 
## API Design
- Rate limiting on ALL auth endpoints (10 req / 15 min)
- Rate limiting on expensive operations (AI calls, file uploads)
- Error messages: never expose stack traces or internal paths
- CORS: explicit origin whitelist only, never wildcard in production
 
## Secrets
- All secrets in environment variables, never hardcoded
- Never log secrets, tokens, or passwords
- Rotate secrets on any suspected compromise immediately
 
## Security Review Trigger
When generating any code that touches: auth, payments, file upload,
user data, admin functions, or external API calls — automatically
perform a security audit pass before presenting the final code.

With this in your CLAUDE.md, every AI interaction in your project inherits these constraints. The model will write code that follows them by default — and flag when a prompt is asking it to violate them.

Common Mistakes to Avoid

⚠️ Security Orchestrator Anti-Patterns:

Alert Fatigue by Design

Running every possible security scanner and leaving all findings enabled produces hundreds of low-value alerts per week. Developers learn to dismiss them. Configure your tools to surface only actionable, high-confidence findings. Tune aggressively. A scanner that cries wolf is worse than no scanner.

Security as a PR Blocker (Without Context)

Failing every build on any security finding creates developer friction that eventually gets the security tooling disabled or bypassed entirely. Grade findings by severity and exploitability. Block on critical. Warn on high. Report on medium. This nuance determines whether security tooling gets embraced or circumvented.

Dependency Neglect

The most exploited vulnerabilities in production in 2025 were not zero-days — they were known CVEs in outdated dependencies that had patches available for months. Enable Dependabot. Review its PRs. Merge security updates within 48 hours of release for critical/high severity findings.
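
Enabling Dependabot is a one-file change. A minimal `.github/dependabot.yml` for an npm project (the weekly cadence and PR limit are common choices, not requirements):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
```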

Treating Security Headers as Optional

Content Security Policy, HSTS, and X-Frame-Options prevent entire classes of client-side attacks. They take 30 minutes to configure correctly once and protect every page indefinitely. There is no good reason to skip them — yet most applications in production are missing at least one.

No Incident Response Plan

Most developers think about security in terms of prevention. Prevention fails eventually. When it does, having a documented process for rotating secrets, revoking tokens, notifying users, and assessing breach scope is the difference between a contained incident and a catastrophic one. Write it before you need it.
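
A runbook doesn't need to be elaborate to be useful. A skeleton to adapt (the steps and their order are a starting point, not a standard):

```
INCIDENT RESPONSE RUNBOOK (skeleton)

1. Contain:   revoke active sessions and tokens; take affected endpoints offline if needed
2. Rotate:    all secrets that could have been exposed (DB credentials, JWT secret, API keys)
3. Assess:    use audit logs to determine what was accessed, by whom, and when
4. Notify:    users and regulators per your legal obligations and their timelines
5. Remediate: patch the root cause; add a regression test or scanner rule for it
6. Review:    postmortem: what detection or control would have caught this sooner?
```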

The Practical Checklist: Ship It Securely

Before any feature goes to production, run through this list. It covers the scenarios that produce the majority of real-world breaches in MERN and Next.js applications:

AUTHENTICATION & AUTHORIZATION
□ All auth checks are server-side — no client-controlled roles
□ Protected API routes verify session before processing request
□ Passwords hashed with bcrypt (cost ≥ 12)
□ JWT secret is strong (≥ 32 random chars) and rotated periodically
□ Rate limiting on login, register, and password reset endpoints

INPUT & DATA HANDLING
□ All user inputs validated with Zod (type, length, format)
□ MongoDB queries use validated types — not raw req.body
□ File uploads: type validation + size limits + no executable extensions
□ API responses exclude sensitive fields (password, tokens, internal IDs)

INFRASTRUCTURE
□ Security headers set in next.config.js (CSP, HSTS, X-Frame-Options)
□ CORS: explicit origin whitelist — no wildcard in production
□ Environment variables: no secrets hardcoded or committed
□ NODE_ENV=production (disables error stack traces in responses)
□ Database not publicly accessible (private network / VPC)
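
The NODE_ENV item exists because of leaks like stack traces in error responses. One way to centralize that decision in a single helper (the response shape is illustrative):

```javascript
// Map any thrown error to a client-safe payload. Internals (message,
// stack, file paths) are only included outside production.
function toErrorResponse(err, env = process.env.NODE_ENV) {
  const safe = { error: 'Internal server error' };
  if (env !== 'production') {
    safe.detail = err.message; // dev/staging only
    safe.stack = err.stack;
  }
  return safe;
}
```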

DEPENDENCIES & SUPPLY CHAIN  
□ npm audit passes with no high/critical vulnerabilities
□ Dependabot enabled with auto-merge for patch updates
□ No abandoned packages (last publish > 2 years ago) in critical paths
□ SBOM generated for compliance-sensitive deployments

CI/CD PIPELINE
□ SAST scanner (Semgrep) runs on every PR
□ SCA scanner (Snyk) runs on every PR
□ Secret detection (Gitleaks) runs on every commit
□ Container scan (Trivy) runs on every image build
□ DAST scan (ZAP) runs against staging on main branch merges

Conclusion: The Developer Who Ships Security

The transition from developer to security orchestrator isn't a career pivot. It's a natural extension of what it means to be a professional developer in 2026.

Successful DevSecOps teams in 2026 embed secure-by-default practices across every layer of development — using hardened templates, trusted components, automated policy enforcement, and pre-configured security guardrails.

The tools are better than they've ever been. IDE plugins that catch vulnerabilities as you type. AI assistants that can audit their own output when prompted correctly. CI/CD scanners that triage findings by exploitability rather than generating noise. Security has never been more accessible to developers who choose to engage with it.

What requires your judgment — and what no tool will replace — is the architectural thinking. Knowing where trust boundaries belong. Designing APIs that expose the minimum necessary data. Choosing the right validation layer. Building rate limiting before you need it, rather than after a breach forces you to.

The future of DevSecOps is collaborative, intelligent, and defined by code. Your role must change with it. Developers need to use tools that bring security into their workflow. DevSecOps professionals must become expert automators.

The developer who can orchestrate that combination — automated tooling, AI-assisted review, and architectural security thinking — is not just more hireable. They're building software that will still be standing in three years. In a landscape where 82% of containerized applications suffered a breach last year, that is not a small thing.

Start with your CLAUDE.md. Add the security conventions section. Enable Dependabot. Add Gitleaks to your pre-commit hook. Drop the CI/CD security pipeline into your next project. Each addition takes less than an hour. Together, they transform your security posture — and your professional profile.

For more practical guides on building production-grade, security-first MERN and Next.js applications, visit ItsEzCode and explore the complete library.

Tools & Resources

  • Snyk — Developer-first dependency and code security
  • Semgrep — Fast, open-source SAST for modern stacks
  • Gitleaks — Secret detection for git repositories
  • OWASP ZAP — Open-source DAST scanner
  • Trivy — Container and filesystem vulnerability scanner
  • OWASP Top 10 — The canonical application security risk list
  • Practical DevSecOps — Hands-on DevSecOps training and certification

Last updated: March 2026

Security tooling evolves rapidly — verify tool versions and CVE databases are current before applying any pipeline configuration.

Author

Malik Saqib

I craft short, practical AI & web dev articles. Follow me on LinkedIn.