{"name":"Neexo AI Hub","url":"https://awesome.neexo.dk","generatedAt":"2026-05-16T12:01:25.534Z","categories":[{"category":"agents","entries":[{"slug":"code-reviewer","category":"agents","data":{"name":"Neexo Code Reviewer","description":"A focused reviewer agent for Neexo projects that prioritizes bugs, security issues, tenant isolation, production risk, and missing validation over style feedback.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["code-review","security","neexo"],"featured":true},"content":"\n## Overview\n\nUse this agent when a change needs a high-signal review before merge. It should inspect only the diff or explicitly scoped files, then report issues that matter in production.\n\n## Review Focus\n\n- Logic bugs and edge cases\n- Security and privacy risks\n- Tenant or organization isolation\n- Data loss and migration risk\n- Missing validation or tests\n- Deployment and runtime failures\n\n## Cost Discipline\n\nFor routine reviews, use scoped diffs or selected files. 
Avoid broad repository scans unless the task is explicitly a repository-wide audit.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"consensus-review","category":"agents","data":{"name":"Neexo Consensus Review","description":"A multi-perspective review pattern for high-risk changes where architecture, security, performance, and UX concerns need separate review passes.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["code-review","architecture","high-risk"],"featured":false},"content":"\n## Overview\n\nUse consensus review sparingly for high-risk changes such as auth, billing, database schema, tenant isolation, production deployment, or large cross-domain refactors.\n\n## When to Use\n\n- Auth or billing logic changes\n- Database schema migrations affecting production data\n- Tenant isolation boundary changes\n- Large refactors spanning 5+ files across domains\n- Security-sensitive changes (CORS, CSP, secrets handling)\n\n## Cost Warning\n\nMulti-perspective review can be expensive under AI Credit billing. Prefer a single reviewer for normal PRs. Use consensus review only when the risk justifies the cost.\n\n## Review Perspectives\n\nEach pass focuses on one concern:\n\n| Pass | Focus | Key Questions |\n|------|-------|---------------|\n| **Architecture** | Structure, boundaries, coupling | Does this change respect module boundaries? Will it cause cascading changes? |\n| **Security** | Auth, secrets, injection, OWASP | Are role checks preserved? Is user input validated? |\n| **Performance** | Queries, caching, bundle size | Are there N+1 queries? Does this affect page load? |\n| **UX** | User flow, error states, accessibility | Does the happy path work? Are error states handled? 
|\n\n## How to Invoke\n\n```markdown\nReview this PR from four perspectives: architecture, security, performance, and UX.\nFor each perspective, list findings with severity (critical/high/medium/low),\nthe affected file, and a specific recommendation. Dismiss style-only findings.\n```\n\n## Output\n\nEach reviewer should provide concrete findings with:\n- **Severity**: critical / high / medium / low\n- **Evidence**: specific code reference or line\n- **Affected file**: path to the file\n- **Recommendation**: specific fix, not vague advice\n\nDismiss style-only or subjective findings that do not affect correctness, security, or performance.\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"studio-editor","category":"agents","data":{"name":"Neexo Studio Editor Agent","description":"A specialized agent pattern for 3D configurator studio work, including editor flows, machine configuration, asset metadata, and viewer handoff.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["configurator","3d","studio"],"featured":false},"content":"\n## Overview\n\nUse this agent pattern for studio-facing configurator work where UI state, 3D metadata, asset references, and publishing workflows meet.\n\nThe agent should behave like an editor engineer, not a generic UI helper. 
It needs to understand the split between a mutable studio workspace and a stable published viewer state, and it should preserve that boundary whenever it changes data models, editor panels, preview rendering, or publish behavior.\n\n## Responsibilities\n\n- Inspect shared schemas, editor state stores, asset metadata types, and viewer read models before changing behavior\n- Keep draft, preview, and published configuration states distinct\n- Prefer deterministic serialization for machine, variant, material, and scene settings\n- Preserve undo/redo and dirty-state semantics when editing studio controls\n- Treat asset paths, thumbnails, CAD/GLB metadata, and generated previews as references unless the task explicitly requires binary asset work\n- Validate that published viewer payloads remain backward compatible\n\n## Editor State Pattern\n\n1. Identify the canonical source of truth: schema, store, URL state, database row, or generated artifact\n2. Update derived editor state through existing actions or reducers\n3. Keep transient UI state local to the panel or tool surface\n4. Ensure publish/export code receives validated data, not raw form state\n5. 
Add focused tests or smoke checks around serialization, preview loading, and publish handoff\n\n## Asset Workflow\n\n- Read metadata and manifests before opening large geometry, texture, or render files\n- Do not duplicate assets to solve reference bugs; fix the reference or manifest mapping\n- Preserve units, coordinate systems, orientation, pivot assumptions, and material names\n- Keep generated thumbnails and previews reproducible from the source asset pipeline\n- Call out when a change requires re-exporting assets from Blender, Unity, or an external configurator tool\n\n## Publishing Handoff\n\nBefore changing publish behavior, verify:\n\n- which fields are draft-only and which fields are viewer-facing\n- whether the viewer supports older published payloads\n- whether generated bundles include asset fingerprints or version metadata\n- how errors are surfaced to studio users during validation or upload\n- how rollback works when a publish partially succeeds\n\n## Guardrails\n\n- Read schemas and shared types before changing editor behavior\n- Prefer asset metadata over loading binary files\n- Keep studio write paths separate from published viewer read paths\n- Validate changes with focused type checks and browser smoke tests\n- Do not silently change published payload shape without migration or compatibility notes\n","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"worker","category":"agents","data":{"name":"Neexo Worker Agent","description":"A background-job agent pattern for queue processing, render jobs, asset pipelines, and other asynchronous workflows in Neexo projects.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["worker","jobs","automation"],"featured":false},"content":"\n## Overview\n\nUse this agent pattern for changes to background jobs, workers, queue handlers, render pipelines, or other asynchronous task processors.\n\n## When to Use\n\n- Editing queue consumer/producer code\n- Adding or modifying scheduled 
jobs (cron, intervals)\n- Changing render pipeline steps or asset post-processing\n- Reviewing retry logic, dead-letter queues, or failure recovery\n\n## Guardrails\n\n- **Idempotency** — verify that reprocessing the same job produces the same result without side effects\n- **Retry behavior** — check retry limits, exponential backoff, and dead-letter handling\n- **Failure paths** — ensure failed jobs log actionable context (job ID, attempt count, error) without leaking secrets or customer data\n- **Payload schemas** — keep job payloads versioned and narrow; avoid passing entire database rows\n- **Timeouts** — set explicit timeouts on job execution to prevent zombie workers\n\n## Validation\n\n- Run the smallest available worker or queue test before broader integration tests\n- For render jobs, validate with a single lightweight asset before triggering full batch renders\n- Check that failed jobs appear in monitoring/alerting and are not silently dropped\n\n## Example Agent Instruction\n\n```markdown\nYou are a worker-agent. Before modifying any job handler:\n1. Read the job schema and check recent processing logs\n2. Verify idempotency — would rerunning this job cause duplicate side effects?\n3. Confirm retry/failure behavior is tested\n4. Keep the change scoped — do not refactor unrelated worker code\n```\n","lastUpdated":"2026-05-16T12:39:57+02:00"}]},{"category":"instructions","entries":[{"slug":"cost-awareness","category":"instructions","data":{"name":"Cost-Aware AI Instructions","description":"Always-on guidance for keeping AI context small, avoiding expensive agent sessions, and classifying tasks before broad edits.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*"],"tags":["cost","ai-credits","guardrails"],"featured":true},"content":"\n## Overview\n\nThese instructions are the baseline for AI-assisted work in Neexo repositories. 
They help agents keep context small, avoid broad scans, and treat high-risk changes as planning tasks before implementation.\n\n## Core Rule\n\nNever combine an expensive model, large repository context, and a vague task. That pattern is the fastest route to unnecessary AI Credit usage.\n\n## Task Classes\n\n- LOW: 1-3 files, clear local change\n- MEDIUM: 3-7 files or uncertain scope\n- HIGH: auth, security, billing, database schema, tenant isolation, deployment, AI/model usage, or broad architecture changes\n\nLOW tasks can be edited directly. MEDIUM tasks should start with a short plan. HIGH tasks should stop after analysis unless explicitly approved.\n\n## Tooling Rule\n\nPrefer local CLI tools over MCP servers when a suitable CLI exists. CLIs are often cheaper for context because the agent can ask for `--help`, inspect only the relevant command, and receive compact output.\n\nUse MCP when the task needs interactive remote capabilities, richer API context, or cross-system access that the CLI does not provide.\n\nIf an approved Neexo Windows AI toolbelt CLI is missing and the task benefits from it, install it with `winget`, verify it with `--version`, and continue. Ask before installing other tools or modifying PowerShell profile files.\n\nExample: prefer GitHub CLI (`gh`) for routine issue, PR, repo, and workflow queries. 
Use GitHub MCP when the work benefits from persistent API tooling or agent-managed GitHub operations.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"clerk-auth","category":"instructions","data":{"name":"Clerk Authentication","description":"Authentication patterns for Next.js apps using Clerk, including middleware setup, role-based access, organization scoping, and dev-mode fallbacks.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx}","**/proxy.ts","**/middleware.ts"],"tags":["auth","clerk","nextjs"],"featured":false},"content":"\n## Overview\n\nClerk provides authentication, user management, and organization-based RBAC for Next.js apps. In Neexo projects, Clerk handles sign-in, sign-up, organization switching, and role enforcement.\n\n## Next.js 16 Middleware\n\nNext.js 16 uses `proxy.ts` (named `proxy` export) instead of `middleware.ts`:\n\n```tsx\n// src/proxy.ts\nimport { clerkMiddleware, createRouteMatcher } from \"@clerk/nextjs/server\";\n\nconst isPublicRoute = createRouteMatcher([\"/\", \"/sign-in(.*)\", \"/sign-up(.*)\", \"/api/public(.*)\"]);\n\nexport const proxy = clerkMiddleware(async (auth, request) => {\n  if (!isPublicRoute(request)) {\n    await auth.protect();\n  }\n});\n\nexport const config = {\n  matcher: [\"/((?!_next|[^?]*\\\\.(?:html?|css|js(?!on)|jpe?g|webp|png|gif|svg|ttf|woff2?|ico|csv|docx?|xlsx?|zip|webmanifest)).*)\"],\n};\n```\n\n## Auth Context Helper\n\n```tsx\nimport { auth } from \"@clerk/nextjs/server\";\n\nexport async function getAuthContext() {\n  const { userId, orgId, orgRole } = await auth();\n  \n  if (!userId || !orgId) {\n    throw new Error(\"Unauthorized\");\n  }\n  \n  return { userId, orgId, orgRole };\n}\n```\n\n## Role-Based Access\n\n```tsx\nconst STUDIO_ROLES = new Set([\"org:admin\", \"org:editor\"]);\n\nexport function hasStudioAccess(orgRole: string | undefined) {\n  return orgRole ? 
STUDIO_ROLES.has(orgRole) : false;\n}\n\n// In an API route or server action:\nconst { orgRole } = await getAuthContext();\nif (!hasStudioAccess(orgRole)) {\n  return new Response(\"Forbidden\", { status: 403 });\n}\n```\n\n## Organization Scoping\n\nEvery database query must filter by organization:\n\n```tsx\n// ✅ Correct — always scope by orgId\nconst items = await db.select().from(machines).where(eq(machines.orgId, orgId));\n\n// ❌ Wrong — returns data across all organizations\nconst items = await db.select().from(machines);\n```\n\n## Dev Mode Fallback\n\nFor local development without Clerk keys:\n\n```tsx\nconst DEV_USER_ID = \"dev-user\";\nconst DEV_ORG_ID = \"dev-org\";\nconst DEV_ORG_ROLE = \"org:admin\";\n\nexport async function getAuthContext() {\n  if (process.env.NODE_ENV === \"development\" && !process.env.CLERK_SECRET_KEY) {\n    return { userId: DEV_USER_ID, orgId: DEV_ORG_ID, orgRole: DEV_ORG_ROLE };\n  }\n  // ... real auth\n}\n```\n\n## Common Pitfalls\n\n- Do not call `auth()` in client components — it's server-only\n- Use `useUser()` and `useOrganization()` for client-side auth state\n- Wrap Clerk components (UserButton, OrganizationSwitcher) in safe client wrappers\n- Never expose `CLERK_SECRET_KEY` to the client — only `NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY`\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"cli-first-tooling","category":"instructions","data":{"name":"CLI-First Tooling Instructions","description":"Guidance for preferring compact local CLI tools over MCP servers when they provide the same capability with less context and lower AI cost.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ps1,sh,bash,zsh,fish,bat,cmd,json,yml,yaml,md}"],"tags":["cli","mcp","cost"],"featured":false},"content":"\n## Overview\n\nAgents should prefer local CLI tools when they are available and suitable for the task. 
A CLI can usually explain itself through `--help`, expose focused subcommands, and return compact output.\n\n## Rule\n\nUse a CLI first when:\n\n- the task is narrow and command-based\n- the CLI is already installed\n- `--help` or `help <command>` gives enough guidance\n- the output can be filtered or scoped\n\nUse MCP when:\n\n- the task needs richer remote API access\n- the agent benefits from structured tool calls\n- the task spans several resources or systems\n- the CLI is missing important capabilities\n\n## Installation\n\nIf an approved Neexo Windows AI toolbelt CLI is missing, install it with `winget`, verify it with `--version`, and continue. Ask before installing other tools or modifying PowerShell profile files.\n\nExample: use GitHub CLI (`gh`) for routine issue, PR, repository, and workflow operations. Use GitHub MCP when persistent GitHub API tooling or richer agent-managed operations are needed.\n\n## Preferred Commands\n\n| Need | Prefer | Example |\n|---|---|---|\n| Find files | `rg --files` or `fd` | `rg --files content plugins` |\n| Search text | `rg` | `rg \"copilot plugin\" docs content` |\n| Inspect JSON | `jq` | `jq '.plugins[].name' .github/plugin/marketplace.json` |\n| Inspect YAML | `yq` | `yq '.includes.instructions' content/plugins/neexo-guardrails.md` |\n| GitHub routine work | `gh` | `gh pr view --json title,files,reviews` |\n| AST-aware code search | `sg` | `sg -p 'useEffect($$$)' website/src` |\n| Repo size overview | `tokei` | `tokei website/src content` |\n\n## Output Rules\n\n- Scope commands to the smallest folder or file set that can answer the question.\n- Filter large output with `--json`, `--fields`, `Select-Object`, `jq`, or `rg -n`.\n- Prefer read-only commands before commands that write files, open browser sessions, or call remote APIs.\n- For interactive commands, answer one prompt at a time and never route secrets through 
chat.\n","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"figma-hmi-standards","category":"instructions","data":{"name":"Figma HMI Design Standards","description":"Design-to-code standards for Figma-based HMI (Human-Machine Interface) projects — Figma plugin architecture, monorepo conventions, OPC-UA integration, and runtime rendering patterns.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.ts","**/*.tsx"],"tags":["figma","hmi","monorepo","typescript"],"featured":false},"content":"\n## Monorepo Structure\n\nHMI compiler projects use a pnpm monorepo with `@figma-hmi/*` aliases:\n\n| Package | Purpose |\n|---|---|\n| `schema` | Domain types + Zod validation (base dependency) |\n| `design` | Design document structure + CSS rendering |\n| `protocol` | WebSocket message schemas |\n| `runtime-core` | Pure business logic — no React |\n| `runtime-web` | React UI + connectors (mock, WebSocket) |\n| `compiler` | CLI to package into deployable artifact |\n| `figma-plugin` | Figma plugin UI |\n| `opcua-gateway` | OPC-UA server bridge |\n\nCross-package imports use `@figma-hmi/*` aliases. Internal deps use `workspace:*` protocol.\n\n## TypeScript Rules\n\n- Target ES2022, strict mode, ESM (`\"type\": \"module\"`)\n- Explicit `type` imports: `import type { Foo } from '...'`\n- `node:` prefix for Node.js built-ins: `import { resolve } from 'node:path'`\n- Define explicit return types on functions\n- Discriminated unions + Zod schemas for domain types; infer via `z.infer<typeof Schema>`\n\n## React (Runtime Web)\n\n- React 19, functional components only\n- Wrap presentational components in `memo()` with callback props\n- State: local `useState` only — no global state manager\n- Async effects with cancellation: `let cancelled = false` + cleanup return\n- Inline styles via `CSSProperties` — no Tailwind, no CSS modules\n- Props: explicit interface (`interface FooProps { ... 
}`)\n\n## Figma Plugin Architecture\n\nThe Figma plugin bridges the design tool with the HMI compiler:\n\n1. **Read**: Extract component hierarchy, styles, variables, and interactive states from Figma\n2. **Transform**: Convert to the `@figma-hmi/schema` format (Zod-validated)\n3. **Bundle**: `_generated-project-bundle.ts` is auto-generated from the Figma design\n4. **Deploy**: Compiler packages into Docker container with OPC-UA gateway\n\nRebuild runtime-web before regenerating the bundle:\n```bash\ncorepack pnpm build\nnode apps/figma-plugin/scripts/build.mjs\n```\n\n## OPC-UA Integration\n\n- The `opcua-gateway` bridges HMI runtime to industrial OPC-UA servers\n- WebSocket protocol defined in `@figma-hmi/protocol` with Zod schemas\n- All message types are discriminated unions for type safety\n- Gateway runs as Docker container alongside the compiled HMI frontend\n\n## Testing\n\n- Vitest with `describe`/`it` syntax\n- Test files in `test/` folder per package, named `*.test.ts`\n- React tests: `createRoot` + `act()`, DOM queries via `querySelector`\n- Run per-package: `corepack pnpm --filter <package> test`\n\n## Code Style\n\n- PascalCase: components, types, schemas\n- camelCase: functions, variables, properties\n- `as const` arrays for enum-like values\n- Section headers: `// ─── Section Name ───────────────`\n- Pure functions, immutable patterns — return new objects, don't mutate\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"blender-python","category":"instructions","data":{"name":"Neexo Blender Python Instructions","description":"Guidance for Blender Python scripts, add-ons, subprocess rendering, generated assets, and 3D pipeline automation.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.py"],"tags":["blender","python","3d"],"featured":false},"content":"\n## Overview\n\nUse these instructions for Blender add-ons, render-studio automation, and configurator asset pipelines.\n\n## API and Scripting 
Rules\n\n- Use `bpy.data`, `bpy.context`, and `bpy.ops` through the stable API — avoid internal/undocumented modules\n- Register operators and panels with proper `bl_idname`, `bl_label`, and `bl_options`\n- Always provide `poll()` methods on operators to prevent invalid context execution\n- Use `bpy.app.handlers` for persistent callbacks; clean up handlers on add-on unregister\n\n## Asset Pipeline Rules\n\n- Treat large GLB, EXR, HDR, and render files as assets, not chat context — summarize metadata instead\n- Prefer metadata summaries and small config files before reading generated output\n- Use relative paths in `.blend` files for portability across machines\n\n## Subprocess and Rendering\n\n- Keep subprocess execution deterministic and logged without exposing secrets\n- Use `--background` and `--python` flags for headless Blender rendering:\n  ```bash\n  blender --background scene.blend --python render_script.py -- --output /tmp/render\n  ```\n- Parse arguments after `--` using `sys.argv[sys.argv.index(\"--\") + 1:]`\n- Validate with focused scripts or tests before broad render runs\n\n## Common Pitfalls\n\n- Do not call `bpy.ops` outside the correct context — use context overrides or `temp_override()`\n- Do not block the UI thread with long operations — use modal operators or background threads for heavy work\n- Always check `bpy.app.version` when targeting multiple Blender versions\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"drizzle-database","category":"instructions","data":{"name":"Neexo Drizzle Database Instructions","description":"Database and migration guidance for Drizzle ORM, Drizzle Kit, Neon, tenant-scoped queries, atomic schema changes, and production-safe data access.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*{schema,migration,db,sql,drizzle,neon}*"],"tags":["database","drizzle","neon"],"featured":false},"content":"\n## Overview\n\nDatabase work is high-risk in Neexo projects because 
schema changes often combine tenant scoping, production data, and generated migrations.\n\n## Schema and Migration Rules\n\n- Inspect the relevant schema, migration, query, or action before changing database logic\n- Keep schema changes, generated SQL, and dependent code in the **same commit** — never split across commits\n- Avoid destructive migrations by default — prefer additive changes (new columns, new tables)\n- Use `drizzle-kit generate` to create migration SQL after schema edits\n- Use `drizzle-kit push` for local development against throwaway databases\n- Use `drizzle-kit migrate` for production deployments with migration tracking\n\n## Drizzle Kit Workflow\n\n```bash\n# 1. Edit schema file (e.g. src/db/schema.ts)\n# 2. Push to local dev database\nnpx drizzle-kit push\n\n# 3. Generate migration SQL for production\nnpx drizzle-kit generate\n\n# 4. Commit schema + migration + code together\ngit add -A && git commit -m \"feat(db): add user preferences table\"\n```\n\n## Query Patterns\n\n- Preserve organization, customer, and tenant filters on every query — never return unscoped data\n- Use Drizzle's type-safe query builder; avoid raw SQL unless performance requires it\n- Use `drizzle-kit studio` to inspect data during development — never query production directly\n- Prefer soft delete (`deletedAt` timestamp) for financial, audit, and compliance data\n\n## Common Pitfalls\n\n- Do not run `drizzle-kit push` against production — it applies changes without migration tracking\n- Do not rename columns in production without a two-step migration (add new → migrate data → drop old)\n- Do not use `drizzle-kit drop` without explicit team approval\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"nextjs-conventions","category":"instructions","data":{"name":"Neexo Next.js Conventions","description":"Shared App Router, React 19, TypeScript, UI, and validation conventions for Neexo's Next.js 16 
projects.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx}","**/app/**/*.tsx"],"tags":["nextjs","react","typescript"],"featured":false},"content":"\n## Overview\n\nNeexo projects default to Next.js 16 App Router patterns, server components, narrow TypeScript boundaries, and existing local UI conventions.\n\n## Server vs Client Components\n\n- Prefer server components unless hooks, browser APIs, or event handlers require client components\n- Keep server/client boundaries explicit — mark client components with `\"use client\"` at the top\n- Use server actions for mutations; do not create API routes for simple form submissions\n\n## Next.js 16 Specifics\n\n- `params` and `searchParams` are now async in route components — always `await` them:\n  ```tsx\n  export default async function Page({ params }: { params: Promise<{ slug: string }> }) {\n    const { slug } = await params\n  }\n  ```\n- `cookies()` and `headers()` from `next/headers` are async — `await` before reading\n- Turbopack is the default bundler in development — do not add Webpack-specific config unless required for production\n- Use the `use` hook from React 19 to unwrap promises in client components\n- Use `\"use cache\"` for fine-grained caching instead of relying solely on route segment config\n\n## TypeScript and Code Style\n\n- Avoid `any` unless justified at an integration boundary\n- Reuse existing components, hooks, utilities, and design tokens\n- Add dependencies only when the existing stack cannot reasonably solve the problem\n- Prefer `satisfies` for type narrowing of config objects\n\n## Patterns to Avoid\n\n- Do not use `getServerSideProps` or `getStaticProps` — these are Pages Router patterns\n- Do not wrap server components in unnecessary client wrappers\n- Do not use `useEffect` for data fetching — fetch in server components or use server 
actions\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"playwright-testing","category":"instructions","data":{"name":"Neexo Playwright Testing Instructions","description":"Focused E2E testing guidance for Playwright specs, browser smoke tests, selectors, screenshots, and validation commands.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*{test,spec,e2e,playwright}*.{ts,tsx,js,jsx}"],"tags":["testing","playwright","e2e"],"featured":false},"content":"\n## Overview\n\nUse these instructions when adding or reviewing browser tests. Prefer focused tests over full-suite runs for local logic.\n\n## Locator Strategy\n\nUse stable, user-visible locators in priority order:\n\n1. `getByRole('button', { name: 'Submit' })` — best for accessibility\n2. `getByText('Welcome')` — for visible text content\n3. `getByTestId('checkout-form')` — when role/text is ambiguous\n4. `locator('[data-testid=\"...\"]')` — fallback for complex selectors\n\nAvoid fragile selectors: `nth-child`, deep CSS paths, auto-generated class names.\n\n## Test Structure\n\n- One assertion per logical check — avoid mega-tests that assert 10 things\n- Use `test.describe` to group related scenarios\n- Use `test.beforeEach` for shared navigation, not repeated `goto()` calls\n- Name tests by user behavior: `\"user can add item to cart\"`, not `\"test button click\"`\n\n## Reliability Rules\n\n- Never use `page.waitForTimeout()` — use `expect(locator).toBeVisible()` or `waitForResponse()`\n- Check console errors for UI changes: `page.on('console', ...)`\n- Use `expect(page).toHaveURL()` instead of checking raw URL strings\n- Run the smallest relevant test first — `npx playwright test path/to/file.spec.ts`\n\n## Screenshots and Visual Testing\n\n- Use `await expect(page).toHaveScreenshot()` for visual regression\n- Avoid `whileInView` animations and lazy-loaded images in screenshot tests — they produce flaky baselines\n- Use `page.setViewportSize()` for consistent 
screenshot dimensions\n\n## Patterns to Avoid\n\n- Timing-based waits (`setTimeout`, `waitForTimeout`)\n- Selectors coupled to CSS framework class names (`.css-1a2b3c`)\n- Tests that depend on external API state without mocking\n- Full test suite runs for single-file logic changes\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"security-owasp","category":"instructions","data":{"name":"Neexo Security and OWASP Instructions","description":"Security review guidance covering OWASP Top 10 risks, auth boundaries, secrets handling, tenant isolation, and production-safe configuration.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx,js,jsx,json,yml,yaml,env,md}"],"tags":["security","owasp","privacy"],"featured":false},"content":"\n## Overview\n\nUse these instructions when editing code or configuration that can affect authentication, authorization, tenant boundaries, storage permissions, secrets, billing, or production deployment.\n\n## OWASP Top 10 Awareness\n\nWatch for these categories in every code change:\n\n1. **Broken Access Control** — verify role checks and tenant scoping on every endpoint and server action\n2. **Cryptographic Failures** — never hardcode secrets, use environment variables and proper key management\n3. **Injection** — parameterize all database queries; never interpolate user input into SQL, shell commands, or templates\n4. **Insecure Design** — validate business logic constraints server-side, not only in the UI\n5. **Security Misconfiguration** — review CORS, CSP, exposed headers, verbose error messages, and default credentials\n6. **Vulnerable Components** — audit dependencies regularly; keep frameworks and libraries updated\n7. **Authentication Failures** — enforce strong session management, rate limiting, and MFA where available\n8. **Data Integrity Failures** — verify signatures, checksums, and pipeline integrity for CI/CD and software updates\n9. 
**Logging and Monitoring Gaps** — log security events but never log secrets, tokens, or PII\n10. **Server-Side Request Forgery (SSRF)** — validate and restrict outbound URLs; do not allow user input to control fetch targets\n\n## Required Checks\n\n- Do not expose secrets, tokens, keys, signed URLs, or private customer data\n- Preserve role checks and organization/customer scoping on every mutation\n- Keep server-side validation in place — never rely solely on client-side checks\n- Avoid logging sensitive values (tokens, passwords, PII)\n- Call out any weakening of security controls explicitly in PR descriptions\n\n## Patterns to Reject\n\n- `dangerouslySetInnerHTML` with unsanitized user input\n- String concatenation in SQL queries instead of parameterized statements\n- Disabled CSRF protection without documented justification\n- Broad CORS origins (`*`) in production\n- Secrets committed to version control, even in \"test\" files\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"next-intl","category":"instructions","data":{"name":"next-intl Internationalization","description":"Conventions for multi-language Next.js apps using next-intl, including translation patterns, locale detection, and common pitfalls.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx}","**/messages/**"],"tags":["i18n","nextjs","next-intl"],"featured":false},"content":"\n## Overview\n\nnext-intl provides type-safe internationalization for Next.js App Router apps. 
Neexo projects typically use Danish as the primary language with English and optionally German as secondaries.\n\n## Setup\n\nTranslation files live in `messages/` as JSON:\n\n```\nmessages/\n  da.json    # Danish (primary source)\n  en.json    # English\n  de.json    # German (optional)\n```\n\n## Translation Pattern\n\n```tsx\n\"use client\";\n\nimport { useTranslations } from \"next-intl\";\n\nexport default function HeroSection() {\n  const t = useTranslations(\"hero\");\n  return (\n    <section>\n      <h1>{t(\"title\")}</h1>\n      <p>{t(\"description\")}</p>\n    </section>\n  );\n}\n```\n\n## Rules\n\n- **Danish is the source language** — always write Danish text first, then translate to other languages\n- Use `useTranslations()` hook in client components — never inline text strings\n- Namespace translations by feature/page: `hero.title`, `contact.submit`, `nav.home`\n- Keep translation keys in kebab-case or camelCase — be consistent within the project\n\n## Message File Structure\n\n```json\n{\n  \"nav\": {\n    \"home\": \"Forside\",\n    \"about\": \"Om os\",\n    \"contact\": \"Kontakt\"\n  },\n  \"hero\": {\n    \"title\": \"Industriel 3D-visualisering\",\n    \"description\": \"Vi omdanner tekniske CAD-data til visuelle oplevelser\"\n  }\n}\n```\n\n## Language Switching\n\nFor apps without locale in the URL (single-domain approach):\n\n```tsx\n\"use client\";\n\nimport { useLocale } from \"next-intl\";\n\nfunction LanguageSwitcher() {\n  const locale = useLocale();\n  // Switch by updating cookie or context\n}\n```\n\n## Danish Text Rules\n\n- Use proper Danish characters: æ, ø, å — never ae, oe, aa\n- Do not use em-dashes (—) in translations — rephrase with commas or \"herunder\"\n- Avoid AI/corporate buzzwords: \"transformér\", \"unik\", \"robust\", \"gnidningsfrit\"\n- Write naturally as a Dane would speak\n\n## Common Pitfalls\n\n- Do not use `useTranslations()` in Server Components — use `getTranslations()` instead\n- Keep `defaultLocale` set to 
`\"da\"` in the i18n config\n- Ensure all three language files have the same keys — missing keys cause runtime fallback warnings\n- Do not create inline translation objects — always use the message files\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"react-three-fiber","category":"instructions","data":{"name":"React Three Fiber 3D","description":"Conventions for React Three Fiber (R3F) components including dynamic imports, SSR avoidance, camera patterns, and postprocessing in Next.js apps.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*{canvas,scene,viewer,3d,three,r3f}*.{ts,tsx}"],"tags":["react-three-fiber","threejs","3d","nextjs"],"featured":false},"content":"\n## Overview\n\nReact Three Fiber (R3F) renders Three.js scenes as React components. In Next.js, all 3D code must be client-only to avoid SSR hydration errors.\n\n## Critical: No SSR for 3D\n\nThree.js requires browser APIs (WebGL, canvas). Always use dynamic import with `ssr: false`:\n\n```tsx\n// In a Server Component or page\nimport dynamic from \"next/dynamic\";\n\nconst Scene3D = dynamic(() => import(\"./Scene3D\"), { ssr: false });\n\nexport default function Page() {\n  return <Scene3D />;\n}\n```\n\nThe Scene3D component itself must have `\"use client\"` at the top.\n\n## Canvas Setup Pattern\n\n```tsx\n\"use client\";\n\nimport { Canvas } from \"@react-three/fiber\";\nimport { Environment, OrbitControls } from \"@react-three/drei\";\n\nexport default function Scene3D() {\n  return (\n    <Canvas camera={{ position: [0, 2, 5], fov: 45 }}>\n      <ambientLight intensity={0.5} />\n      <directionalLight position={[5, 5, 5]} />\n      <Environment preset=\"studio\" />\n      <OrbitControls />\n      {/* Your 3D content */}\n    </Canvas>\n  );\n}\n```\n\n## GLB Model Loading\n\n```tsx\nimport { useGLTF } from \"@react-three/drei\";\n\nfunction MachineModel({ url }: { url: string }) {\n  const { scene } = useGLTF(url);\n  return <primitive 
object={scene} />;\n}\n\n// Preload for instant display\nuseGLTF.preload(\"/models/machine.glb\");\n```\n\n## Module Visibility Toggle\n\nFor configurators where modules can be toggled on/off:\n\n```tsx\nimport { useEffect } from \"react\";\nimport { useGLTF } from \"@react-three/drei\";\nimport * as THREE from \"three\";\n\nfunction ConfigurableModel({ url, visibleModules }: { url: string; visibleModules: Set<string> }) {\n  const { scene } = useGLTF(url);\n  \n  useEffect(() => {\n    scene.traverse((node: THREE.Object3D) => {\n      if (node.name) {\n        node.visible = visibleModules.has(node.name);\n      }\n    });\n  }, [scene, visibleModules]);\n  \n  return <primitive object={scene} />;\n}\n```\n\n## Postprocessing\n\n```tsx\nimport { EffectComposer, N8AO, Bloom, ToneMapping } from \"@react-three/postprocessing\";\n\n<EffectComposer>\n  <N8AO aoRadius={0.5} intensity={1} />\n  <Bloom luminanceThreshold={0.9} intensity={0.5} />\n  <ToneMapping />\n</EffectComposer>\n```\n\n## Common Pitfalls\n\n- **Never** import Three.js at the top level of a Server Component — use dynamic imports\n- **Never** use `useEffect` for animations — use R3F's `useFrame` hook instead\n- Avoid creating new `THREE.Vector3()` or `THREE.Color()` inside render — allocate once and reuse\n- Use `<Suspense fallback={...}>` around heavy models for loading states\n- Set `reactStrictMode: false` in `next.config.ts` only if strict mode double-rendering causes problems in a scene — recent R3F versions generally support strict mode\n- For WebGPU detection, check `navigator.gpu` before attempting WebGPU renderer\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"shadcn-ui-v4","category":"instructions","data":{"name":"shadcn/ui v4 (Base UI)","description":"Instructions for shadcn/ui v4 which uses Base UI primitives — use the render prop for composition, not Radix 
asChild.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx}","**/components/**"],"tags":["shadcn","ui","base-ui","react"],"featured":false},"content":"\n## Overview\n\nshadcn/ui v4 switched from Radix UI to Base UI (`@base-ui/react`) under the hood. The most important change is how composition works.\n\n## Critical: render Prop, Not asChild\n\n```tsx\n// ✅ Correct — shadcn/ui v4 (Base UI)\n<Button render={<Link href=\"/dashboard\" />}>Go to Dashboard</Button>\n\n// ❌ Wrong — this is Radix UI / shadcn v3 syntax\n<Button asChild><Link href=\"/dashboard\">Go to Dashboard</Link></Button>\n```\n\nThis is the most common mistake when moving to shadcn/ui v4. `asChild` does not exist in Base UI components.\n\n## Component Installation\n\n```bash\nnpx shadcn@latest add button dialog select\n```\n\nComponents are installed to `components/ui/`. Do not barrel-export them — import directly:\n\n```tsx\nimport { Button } from \"@/components/ui/button\";\nimport { Dialog, DialogContent, DialogTrigger } from \"@/components/ui/dialog\";\n```\n\n## Styling Conventions\n\n- Use `cn()` from `@/lib/utils` for conditional classes:\n  ```tsx\n  import { cn } from \"@/lib/utils\";\n  <div className={cn(\"rounded-xl p-4\", isActive && \"ring-2 ring-primary\")} />\n  ```\n- Use Tailwind utility classes — do not create custom CSS for component styling\n- Respect existing design tokens in `globals.css`\n- Generated `components/ui/` files are exempt from file-size limits\n\n## Patterns\n\n### Dialog with Form\n\n```tsx\n<Dialog>\n  <DialogTrigger render={<Button />}>Open</DialogTrigger>\n  <DialogContent>\n    <form action={submitAction}>\n      {/* form fields */}\n    </form>\n  </DialogContent>\n</Dialog>\n```\n\n### Select with Controlled Value\n\n```tsx\n<Select value={selected} onValueChange={setSelected}>\n  <SelectTrigger>\n    <SelectValue placeholder=\"Choose...\" />\n  </SelectTrigger>\n  <SelectContent>\n    <SelectItem 
value=\"a\">Option A</SelectItem>\n    <SelectItem value=\"b\">Option B</SelectItem>\n  </SelectContent>\n</Select>\n```\n\n## Do Not\n\n- Use `asChild` — it does not exist in Base UI\n- Create new button styles — use `variant` prop on `<Button>`\n- Barrel-export from `components/ui/` — import each component directly\n- Override shadcn component internals unless absolutely necessary\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"unity-csharp-standards","category":"instructions","data":{"name":"Unity C# Standards","description":"Coding standards for Unity C# projects — MonoBehaviour patterns, singleton guards, custom update cycles, serialization rules, and performance-safe hot-path practices.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.cs"],"tags":["unity","csharp","game-development"],"featured":false},"content":"\n## Naming Conventions\n\n- PascalCase for public fields, properties, methods, and class names.\n- camelCase for private fields, local variables, and parameters.\n- Prefix interfaces with `I` (e.g., `IUpdateListener`).\n- Use descriptive names — avoid abbreviations unless universally understood (`fps`, `id`).\n\n## MonoBehaviour Lifecycle\n\n- `Awake()` for self-initialization: singleton setup, internal state, component caching.\n- `Start()` only when initialization depends on other objects being ready.\n- `OnEnable()` / `OnDisable()` for registering / unregistering listeners — prevents stale references.\n- Prefer `FixedUpdate()` for physics and time-critical logic.\n- Avoid per-frame `Update()` when a custom update cycle (via a central manager) is available.\n\n## Singleton Pattern\n\n```csharp\npublic class GameManager : MonoBehaviour\n{\n    public static GameManager Instance { get; private set; }\n\n    private void Awake()\n    {\n        if (Instance != null && Instance != this)\n        {\n            Destroy(gameObject);\n            return;\n        }\n        Instance = this;\n        
DontDestroyOnLoad(gameObject);\n    }\n}\n```\n\nRules:\n- Guard against duplicates in `Awake()` with `Destroy(this)` or `Destroy(gameObject)`.\n- Call `DontDestroyOnLoad(gameObject)` for persistent managers.\n- Never use singletons for data that should be per-scene.\n\n## Serialization & Inspector\n\n- Use `[SerializeField]` for private fields that need Inspector exposure — never make them public just for the Inspector.\n- Use `[System.Serializable]` for nested data classes shown in the Inspector.\n- Avoid exposing runtime-only fields to the Inspector.\n- Use `[Header(\"Section\")]` and `[Tooltip(\"...\")]` to organize complex inspectors.\n\n## Performance (Hot Paths)\n\n- **Zero allocations** in `Update`, `FixedUpdate`, and listener callbacks.\n- Cache component references in `Awake()` — never call `GetComponent<T>()` per frame.\n- Pre-allocate `List<T>` with expected capacity.\n- Prefer `foreach` over LINQ in hot paths — LINQ allocates enumerators.\n- Use object pooling for frequently instantiated/destroyed objects.\n- Prefer `CompareTag(\"Enemy\")` over `gameObject.tag == \"Enemy\"` (avoids string alloc).\n\n## Interfaces & Custom Update Cycles\n\n```csharp\npublic interface IUpdateListener\n{\n    void OnTick(float deltaTime);\n}\n\n// Register in OnEnable, unregister in OnDisable\nprivate void OnEnable() => UpdateManager.Instance.Register(this);\nprivate void OnDisable() => UpdateManager.Instance.Unregister(this);\n```\n\nBenefits:\n- Decoupled communication between systems.\n- Controlled execution order via the manager.\n- Easy to pause/resume groups of listeners.\n\n## ScriptableObject for Data\n\n- Use `ScriptableObject` for shared configuration, game balance data, and event channels.\n- Create assets via `[CreateAssetMenu(fileName = \"New Config\", menuName = \"Game/Config\")]`.\n- Prefer ScriptableObject events over direct references for loose coupling.\n\n## Assembly Definitions\n\n- Use `.asmdef` files to split code into assemblies for faster compile 
times.\n- Separate Editor code into its own assembly with `Editor` platform only.\n- Keep test assemblies separate — mark them as test assemblies in the `.asmdef` so they reference the Unity Test Framework.\n\n## Code Style\n\n- One primary responsibility per script/class.\n- Keep methods under ~30 lines — extract helpers for complex logic.\n- Use enums for fixed value sets (e.g., `UpdateCycle`, `DamageType`).\n- Use `#region` sparingly — prefer small classes over large regions.\n- Always add `<summary>` XML doc comments on public types and methods.\n\n## Common Pitfalls\n\n- **Coroutine leak**: Always `StopCoroutine` or `StopAllCoroutines` in `OnDisable`.\n- **Null after Destroy**: Unity overloads `==` on `UnityEngine.Object` — use `if (obj != null)`; never rely on C# `is null` or the null-conditional operator (`?.`), since both bypass the overload and treat destroyed objects as alive.\n- **Awake order**: Don't depend on `Awake()` order between scripts — use `Start()` or explicit initialization order via `[DefaultExecutionOrder]`.\n- **String-based APIs**: Avoid `SendMessage`, `Invoke(\"MethodName\")` — use direct calls or events.\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"zod-v4","category":"instructions","data":{"name":"Zod v4 Validation","description":"Guidance for using Zod v4 correctly, including the required import path, schema patterns, and type inference in TypeScript projects.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","applyTo":["**/*.{ts,tsx}"],"tags":["zod","validation","typescript"],"featured":false},"content":"\n## Overview\n\nZod v4 changed the import path. 
Projects using Zod v4 **must** import from `\"zod/v4\"`, never from `\"zod\"` directly.\n\n## Critical Import Rule\n\n```tsx\n// ✅ Correct — Zod v4\nimport { z } from \"zod/v4\";\n\n// ❌ Wrong — imports Zod v3 API even if v4 is installed\nimport { z } from \"zod\";\n```\n\nThis is the single most common mistake in Zod v4 codebases and causes subtle runtime issues.\n\n## Schema Patterns\n\n```tsx\nimport { z } from \"zod/v4\";\n\n// Define schemas — note the top-level z.uuid() and z.email()\n// (the chained z.string().uuid() / z.string().email() forms are deprecated in v4)\nconst UserSchema = z.object({\n  id: z.uuid(),\n  email: z.email(),\n  name: z.string().min(1).max(100),\n  role: z.enum([\"admin\", \"editor\", \"viewer\"]),\n  createdAt: z.date(),\n});\n\n// Infer TypeScript type from schema\ntype User = z.infer<typeof UserSchema>;\n\n// Record requires two arguments in Zod v4\nconst MetadataSchema = z.record(z.string(), z.unknown());\n```\n\n## Validator Organization\n\n- One validator file per domain: `src/lib/validators/users.ts`, `src/lib/validators/orders.ts`\n- Export named schemas and their inferred types together\n- Keep schemas close to where they are used (actions, API routes, forms)\n\n## Server Action Pattern\n\n```tsx\nimport { z } from \"zod/v4\";\n\nconst CreateUserSchema = z.object({\n  email: z.email(),\n  name: z.string().min(1),\n});\n\nexport async function createUser(input: unknown) {\n  const parsed = CreateUserSchema.safeParse(input);\n  if (!parsed.success) {\n    return { success: false, error: parsed.error.message };\n  }\n  // Use parsed.data — fully typed\n}\n```\n\n## Common Pitfalls\n\n- `z.record()` requires two arguments in Zod v4: `z.record(z.string(), z.unknown())` — one argument throws\n- String formats moved to the top level in v4: prefer `z.email()`, `z.uuid()`, and `z.url()` over the deprecated `z.string().email()`-style chains\n- `z.coerce.date()` parses strings to dates — useful for form inputs\n- Do not mix Zod v3 and v4 imports in the same project\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"code-review","category":"skills","data":{"name":"Neexo Code Review Skill","description":"A focused review skill for Neexo changes 
that reports real bugs, security issues, tenant isolation problems, and production risks.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["code-review","security","neexo"],"featured":false},"content":"\n## Overview\n\nUse this skill when reviewing a PR, staged changes, or a scoped diff. It should stay high-signal and avoid broad style feedback.\n\n## Review Priorities\n\n- Bugs and logic errors\n- Security vulnerabilities\n- Tenant or organization scoping mistakes\n- Data loss and migration risks\n- Missing validation or meaningful test coverage\n\n## Cost Discipline\n\nReview selected files or diffs. Avoid scanning the whole repository unless the user explicitly asks for a repository-wide audit.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"audit-feature","category":"skills","data":{"name":"Neexo Feature Audit Skill","description":"Audits an existing feature for correctness, missing validation, domain-rule gaps, test coverage, and production risk.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["audit","feature","review"],"featured":false},"content":"\n## Overview\n\nUse this skill after a feature is built, before it is shipped, or when a domain needs a health check.\n\n## Methodology\n\n1. **Scope** — identify the feature boundaries: which files, routes, APIs, and database tables are involved\n2. **Walk the happy path** — trace the primary user flow end-to-end and check for correctness\n3. **Probe edge cases** — test boundary values, empty states, concurrent access, and error conditions\n4. **Check security surface** — verify auth, roles, tenant scoping, and input validation\n5. **Review data layer** — check migrations, query correctness, and cascading deletes/updates\n6. **Assess test coverage** — identify untested paths and suggest focused tests\n7. 
**Evaluate production readiness** — check logging, monitoring, feature flags, and rollback plan\n\n## Audit Areas\n\n| Area | What to Check |\n|------|--------------|\n| Domain rules | Business logic constraints, edge cases, state transitions |\n| Auth and roles | Role checks on every endpoint, tenant isolation, permission escalation |\n| Database | Migration safety, query performance, missing indexes, cascading effects |\n| UI states | Loading, empty, error, success states; accessibility; responsive layout |\n| Test coverage | Missing unit/integration/E2E tests for critical paths |\n| Error handling | Graceful degradation, user-facing error messages, retry behavior |\n\n## Output Format\n\nFor each finding, provide:\n\n- **Severity**: critical / high / medium / low\n- **Area**: which audit area (domain, auth, db, UI, test, error)\n- **File**: affected file path\n- **Finding**: what is wrong or missing\n- **Recommendation**: specific fix or test to add\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"plan-feature-change","category":"skills","data":{"name":"Neexo Feature Planning Skill","description":"Plans medium and high-risk feature changes before implementation, including scope, touched files, validation, and rollout risks.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["planning","architecture","implementation"],"featured":false},"content":"\n## Overview\n\nUse this skill before implementation when a feature crosses multiple modules or touches high-risk areas.\n\n## When to Plan\n\n- Feature touches 3+ modules or services\n- Database schema changes are required\n- Auth, billing, or tenant isolation is affected\n- Breaking changes to APIs or data formats\n- Rollback would be difficult without a plan\n\n## Output Template\n\n### 1. Goal and Non-Goals\n\n- **Goal**: what this feature delivers to the user\n- **Non-goals**: what is explicitly out of scope\n\n### 2. 
Files and Ownership\n\nList the files/modules that will be modified and who owns each area.\n\n### 3. Data and Schema Changes\n\n- New tables, columns, or indexes\n- Migration strategy (additive-only preferred)\n- Backward compatibility with existing data\n\n### 4. Auth and Validation\n\n- Which endpoints need role checks\n- Input validation rules\n- Tenant scoping implications\n\n### 5. Test Plan\n\n- Unit tests for business logic\n- Integration tests for cross-module flows\n- E2E tests for user-facing scenarios\n- Edge cases to cover\n\n### 6. Rollout and Migration Risks\n\n- Feature flag strategy\n- Database migration ordering (migration SQL must ship with code)\n- Rollback plan if deployment fails\n- Monitoring and alerting changes\n\n### 7. Open Questions\n\nList unresolved decisions that need input before implementation starts.\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"git-commit","category":"skills","data":{"name":"Neexo Git Commit Skill","description":"A full commit workflow skill that inspects diffs, checks repository history, verifies generated files and changelog dates, and produces a commit-ready message.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["git","commit","changelog","workflow"],"featured":false},"content":"\n## Overview\n\nThis skill is for repositories where a commit is not just a message but a final quality gate. It covers both quick conventional commit message generation and the fuller review workflow for higher-risk changes.\n\n## Workflow\n\n1. **Inspect changes** — read staged and unstaged diffs with `git diff --cached` and `git diff`\n2. **Check recent history** — run `git log --oneline -10` to match existing commit style\n3. **Verify generated files** — ensure schema changes include their migration SQL, lock files are updated, build output is excluded\n4. 
**Check changelog date** — if a changelog entry is staged, verify the date is correct (run `node -e \"console.log(new Date().toISOString().slice(0,10))\"`)\n5. **Produce message** — write a conventional commit message that matches the repository's style\n\n## Message Rules\n\n- Use conventional commit format: `type(scope): subject`\n- Valid types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`, `perf`, `ci`, `build`\n- Keep subject lines under 72 characters, imperative mood, with no trailing period\n- Prefer outcome over implementation detail: `fix: prevent duplicate order emails` rather than `fix: add check in sendEmail function`\n\n## Quality Gates\n\n- Schema changes, generated migrations, and dependent code **must** be in the same commit\n- Danish changelog text must use æ, ø, and å correctly — never ae, oe, aa\n- Never set `featured: true` without justification\n- Do not commit `.env` files, secrets, or large binary assets\n\n## Examples\n\n```text\nfeat(auth): add MFA support for organization admins\nfix(billing): prevent duplicate invoice generation on retry\ndocs: update Drizzle migration workflow in README\nchore(deps): bump next from 16.1.0 to 16.2.6\n```\n","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"sync-prod-db","category":"skills","data":{"name":"Neexo Production Database Sync Skill","description":"Guides careful production database sync and inspection workflows with explicit safety checks, environment boundaries, and rollback awareness.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["database","production","safety"],"featured":false},"content":"\n## Overview\n\nUse this skill only for approved database maintenance workflows. It is designed to make environment boundaries and destructive risk visible.\n\nThis skill is for inspection, controlled sync, and recovery-aware database work. 
It should slow the agent down around production data, force explicit source/target naming, and make backup and rollback expectations visible before any write operation.\n\n## Safety Checks\n\n- Confirm source and target environments\n- Never print secrets or connection strings\n- Prefer read-only inspection before writes\n- Call out destructive operations before execution\n- Record validation and rollback considerations\n\n## Required Flow\n\n1. **Identify environments** — name the source and target clearly, including provider/project names when visible.\n2. **Verify credentials safely** — confirm which env vars or secret names are being used without printing secret values.\n3. **Inspect first** — run read-only schema/table checks before any write, migration, import, or export.\n4. **Back up target** — require a backup, snapshot, or export path before modifying production-like data.\n5. **Plan rollback** — state exactly how to restore or undo the operation if validation fails.\n6. **Execute narrowly** — prefer scoped tables, filtered records, dry-runs, and transaction boundaries.\n7. 
**Validate** — compare row counts, checksums, schema versions, and application smoke checks after the sync.\n\n## Environment Detection\n\n- Treat names containing `prod`, `production`, `live`, or customer identifiers as production-risk environments.\n- Treat `main`, `master`, and Vercel production deployments as production-risk unless the repo proves otherwise.\n- Do not infer safety from local shell names alone; inspect explicit env vars, config files, or deployment metadata.\n- If `NODE_ENV=production`, `VERCEL_ENV=production`, or a production database URL is active, stop before writes and ask for confirmation.\n\n## Drizzle and Migration Rules\n\n- Keep schema changes, generated migration SQL, and dependent code in the same commit.\n- Prefer `drizzle-kit check` or equivalent validation before applying migrations.\n- Never edit an already-applied production migration in place; add a forward migration instead.\n- For destructive changes, write the data preservation or backfill step explicitly.\n\n## Rollback Checklist\n\n- Backup location and timestamp\n- Restore command or provider snapshot link\n- Tables or tenants affected\n- Post-restore validation query\n- Owner who approved the operation\n","lastUpdated":"2026-05-16T13:53:13+02:00"}]},{"category":"plugins","entries":[{"slug":"neexo-guardrails","category":"plugins","data":{"name":"Neexo Guardrails Plugin","description":"Cost-aware AI coding guardrails for Neexo repositories, including scoped context rules, security checks, database safety, testing guidance, and reusable 
prompts.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","includes":{"agents":[],"skills":[],"instructions":["cost-awareness","security-owasp","drizzle-database","playwright-testing","nextjs-conventions","blender-python","cli-first-tooling","zod-v4","shadcn-ui-v4","react-three-fiber","next-intl","clerk-auth","unity-csharp-standards","figma-hmi-standards"]},"tags":["guardrails","cost","security"],"featured":true},"content":"\n## Overview\n\nNeexo Guardrails helps developers and agents avoid the expensive pattern of vague tasks, large repository context, and high-cost models.\n\n## Includes\n\n- Cost-awareness instructions\n- Security and privacy instructions\n- Database and migration safety instructions\n- Testing and Next.js instructions\n- Asset and generated-file instructions\n- Scoped review, repository audit, monthly usage review, and guardrails installation prompts\n\n## Install\n\n```bash\ncopilot plugin install neexo-guardrails@awesome-neexo\n```\n\nAfter installation, open Copilot Chat Diagnostics in VS Code and confirm the plugin instructions are loaded.\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"neexo-developer","category":"plugins","data":{"name":"Neexo Developer Plugin","description":"The baseline developer plugin for Neexo teams, bundling focused code review and full git commit workflow support for everyday engineering.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","includes":{"agents":["code-reviewer"],"skills":["code-review","git-commit"]},"tags":["neexo","developer-workflow","code-review"],"featured":false},"content":"\n## Overview\n\nInstall this plugin for everyday Neexo development work. 
It provides a focused reviewer and a full git commit workflow that follows the same high-signal style used in active Neexo repositories.\n\n## Install\n\n```bash\ncopilot plugin install neexo-developer@awesome-neexo\n```\n\n## Best Use\n\nUse this plugin for scoped PR review, staged-change summaries, and commit preparation. Pair it with Neexo Guardrails for cost-aware AI behavior.\n","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"neexo-figma","category":"plugins","data":{"name":"Neexo Figma Plugin","description":"Figma-to-code workflow plugin for Neexo teams, bundling HMI design standards with Figma MCP setup guidance.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","includes":{"agents":[],"instructions":["figma-hmi-standards"],"skills":[],"tools":["figma-mcp-server"]},"tags":["figma","design-to-code","hmi"],"featured":false},"content":"\n## Overview\n\nInstall this plugin when a project uses Figma as a source of truth for HMI, web UI, or design-to-code workflows. 
It pairs Neexo's Figma HMI conventions with a ready MCP server configuration.\n\n## Install\n\n```bash\ncopilot plugin install neexo-figma@awesome-neexo\n```\n\n## Best Use\n\nUse it for Figma frame implementation, HMI compiler work, component/token sync, and workflows where design context should be fetched directly from Figma before code is changed.","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"neexo-unity","category":"plugins","data":{"name":"Neexo Unity Plugin","description":"Unity workflow plugin for Neexo teams, bundling Unity C# standards with MCP Unity setup guidance.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","includes":{"agents":[],"instructions":["unity-csharp-standards"],"skills":[],"tools":["mcp-unity"]},"tags":["unity","mcp","game-development"],"featured":false},"content":"\n## Overview\n\nInstall this plugin for Unity projects where Copilot should understand Neexo's Unity C# standards and the MCP Unity workflow.\n\n## Install\n\n```bash\ncopilot plugin install neexo-unity@awesome-neexo\n```\n\n## Best Use\n\nUse it for Unity 6+ projects, editor tooling, prefab and scene work, tests, and AI-assisted MCP Unity operations that inspect the editor before making changes.","lastUpdated":"2026-05-16T13:53:13+02:00"}]},{"category":"hooks","entries":[{"slug":"pre-commit-lint","category":"hooks","data":{"name":"Pre-Commit Lint","description":"Runs ESLint and Prettier on staged files before each commit, blocking commits that introduce lint errors.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","event":"pre-commit","command":"npx lint-staged","tags":["lint","eslint","prettier","quality"],"featured":true},"content":"\n## Overview\n\nAutomatically validates code quality before each commit. Runs ESLint and Prettier on staged files only — fast, targeted, and non-intrusive.\n\n## What It Does\n\n1. Intercepts `git commit`\n2. Runs `lint-staged` on staged `.ts`, `.tsx`, `.js`, `.jsx` files\n3. 
Blocks the commit if any lint errors are found\n4. Allows warnings to pass through\n\n## Configuration\n\nAdd to your `lint-staged` config:\n\n```json\n{\n  \"*.{ts,tsx}\": [\"eslint --fix\", \"prettier --write\"],\n  \"*.{js,jsx}\": [\"eslint --fix\", \"prettier --write\"],\n  \"*.{json,md}\": [\"prettier --write\"]\n}\n```\n","lastUpdated":"2026-05-08T13:29:32+02:00"},{"slug":"post-push-notify","category":"hooks","data":{"name":"Post-Push Notify","description":"Sends a Slack or Teams notification after a successful push, including branch name, commit count, and summary.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","event":"post-push","command":"node .github/hooks/notify.mjs","tags":["notifications","slack","teams"],"featured":false},"content":"\n## Overview\n\nKeeps the team informed about pushes by sending a formatted notification to Slack or Microsoft Teams after each push.\n\n## What Gets Sent\n\n- Branch name\n- Number of commits pushed\n- One-line summary of each commit\n- Author name\n- Link to the compare view on GitHub\n\n## Setup\n\n1. Set `SLACK_WEBHOOK_URL` or `TEAMS_WEBHOOK_URL` in your environment\n2. Add the hook script to `.github/hooks/notify.mjs`\n3. Register via Copilot hooks configuration\n","lastUpdated":"2026-05-08T13:29:32+02:00"}]},{"category":"workflows","entries":[{"slug":"auto-pr-review","category":"workflows","data":{"name":"Auto PR Review","description":"Enable Copilot-powered code review on every pull request via GitHub's built-in code review feature, posting inline comments automatically.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["github","code-review","automation","copilot"],"featured":true},"content":"\n## Overview\n\nGitHub Copilot Code Review analyzes every pull request diff and posts inline comments focusing on bugs, security issues, and logic errors. It runs as a built-in GitHub feature — no custom GitHub Action required.\n\n## How to Enable\n\n1. 
Go to your repository or organization **Settings → Copilot → Code Review**\n2. Enable **Copilot Code Review** for pull requests\n3. Optionally configure a `copilot-review-config.yml` in `.github/` to customize review behavior\n\n## How It Works\n\n1. A PR is opened or updated\n2. Copilot automatically analyzes the diff\n3. Inline comments are posted as a review on relevant lines\n4. A summary comment provides an overall assessment\n\n## Requesting Review Manually\n\nYou can also request Copilot as a reviewer on any PR:\n\n1. Open the PR → **Reviewers** → add **Copilot**\n2. Or use GitHub CLI:\n\n```bash\ngh pr edit <number> --add-reviewer copilot\n```\n\n## Optional: Trigger Review via Workflow\n\nTo ensure Copilot review is always requested on new PRs automatically:\n\n```yaml\nname: Request Copilot Review\non:\n  pull_request:\n    types: [opened, synchronize]\n\njobs:\n  request-review:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Add Copilot as reviewer\n        env:\n          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        run: gh pr edit ${{ github.event.pull_request.number }} --add-reviewer copilot --repo ${{ github.repository }}\n```\n\n## Tips\n\n- Copilot Code Review works best with clear PR descriptions and focused diffs\n- It catches common bugs, security anti-patterns, and logic errors\n- For high-risk changes, combine with human review — Copilot supplements but does not replace reviewers\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"auto-label","category":"workflows","data":{"name":"Auto Label Issues","description":"GitHub Actions workflow that reads issue content and automatically applies relevant labels based on keyword matching and configurable rules.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","tags":["github-actions","issues","automation","triage"],"featured":false},"content":"\n## Overview\n\nEliminates manual issue triage by automatically applying labels when issues are created, based on 
configurable keyword patterns in `.github/labeler.yml`.\n\n## Labels Applied\n\n- `bug`, `feature`, `question`, `docs` — matched by keywords in issue title and body\n- `priority:high`, `priority:low` — matched by urgency keywords\n- Language/framework labels — matched by code fence language identifiers\n\n## Example Workflow\n\n```yaml\nname: Auto Label\non:\n  issues:\n    types: [opened]\n\njobs:\n  label:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: github/issue-labeler@v4\n        with:\n          repo-token: ${{ secrets.GITHUB_TOKEN }}\n          configuration-path: .github/labeler.yml\n```\n\n## Configuration\n\nCreate `.github/labeler.yml` with keyword-to-label mappings:\n\n```yaml\nbug:\n  - \"(crash|error|broken|fail|exception)\"\nfeature:\n  - \"(feature request|enhancement|add support)\"\nquestion:\n  - \"(how to|question|help|is there a way)\"\ndocs:\n  - \"(documentation|readme|typo|docs)\"\n```\n\n## Note\n\nThis uses keyword-based pattern matching, not AI classification. For AI-powered triage, consider combining with a Copilot agent that reads the issue and suggests labels via the GitHub API.\n","lastUpdated":"2026-05-16T12:39:57+02:00"}]},{"category":"tools","entries":[{"slug":"figma-mcp-server","category":"tools","data":{"name":"Figma MCP Server","description":"Official Figma MCP server for extracting design context, generating code from frames, syncing variables and components, and writing directly to the Figma canvas from your AI coding agent.","author":"Figma","source":"https://github.com/figma/mcp-server-guide","category":"mcp-server","tags":["mcp","figma","design-to-code"],"featured":true},"content":"\n## Overview\n\nThe official Figma MCP Server (1.4k+ stars) brings Figma directly into your AI coding workflow. 
It provides structured design data — components, variables, layout, and assets — to AI agents generating code from Figma design files.\n\nAvailable as a **remote Streamable HTTP** server at `https://mcp.figma.com/mcp` — no local installation needed.\n\n## Key Features\n\n- **Generate code from selected frames** — select a Figma frame, paste the link in your IDE, and get code\n- **Extract design context** — pull variables, components, and layout data into your IDE\n- **Write to the canvas** (beta) — create and modify native Figma content directly from your MCP client\n- **Code Connect** — reuse your actual codebase components for consistent code generation\n- **Generate Figma designs from web pages** (rolling out) — capture and convert web pages into Figma designs\n\n## Core Tools\n\n| Tool | Purpose |\n|---|---|\n| `get_design_context` | Structured React + Tailwind representation of a Figma selection |\n| `get_variable_defs` | Extract variables and styles (color, spacing, typography tokens) |\n| `get_screenshot` | Visual reference of the node variant being implemented |\n| `get_metadata` | High-level node map for large selections |\n| `use_figma` | Write operations — create/modify content on the canvas |\n| `search_design_system` | Search components, variables, and text styles |\n| `generate_figma_design` | Create Figma designs from code or web pages |\n\n## Installation\n\n### VS Code\n\n1. `⌘ Shift P` → `MCP: Add Server`\n2. Select `HTTP`\n3. Paste `https://mcp.figma.com/mcp`\n4. 
Server ID: `figma`\n\nOr add to `.vscode/mcp.json`:\n\n```json\n{\n  \"servers\": {\n    \"figma\": {\n      \"type\": \"http\",\n      \"url\": \"https://mcp.figma.com/mcp\"\n    }\n  }\n}\n```\n\n### Cursor\n\nIn agent chat: `/add-plugin figma`\n\n### Claude Code\n\n```bash\nclaude plugin install figma@claude-plugins-official\n```\n\n### Other editors\n\nAny editor supporting Streamable HTTP:\n\n```json\n{\n  \"mcpServers\": {\n    \"figma\": {\n      \"url\": \"https://mcp.figma.com/mcp\"\n    }\n  }\n}\n```\n\n## Best Practices\n\n### Structure your Figma file\n\n- Use **components** for anything reused (buttons, cards, inputs)\n- Link components to your codebase via **Code Connect**\n- Use **variables** for spacing, color, radius, and typography\n- Name layers semantically (`CardContainer`, not `Group 5`)\n- Use **Auto layout** to communicate responsive intent\n- Add **annotations** and **dev resources** for behavior that's hard to capture visually\n\n### Write effective prompts\n\n- Specify your framework: \"Generate iOS SwiftUI code from this frame\"\n- Specify your design system: \"Use Chakra UI for this layout\"\n- Specify file paths: \"Add this to `src/components/ui/PricingCard.tsx`\"\n- Specify layout systems: \"Use our `Stack` layout component\"\n\n### Recommended agent rules\n\n```markdown\n## Figma MCP Integration Rules\n\n### Required flow (do not skip)\n1. Run get_design_context first for the exact node(s)\n2. If truncated, run get_metadata then re-fetch specific nodes\n3. Run get_screenshot for visual reference\n4. Download any assets needed, then implement\n5. Translate output into project conventions and tokens\n6. 
Validate against Figma for 1:1 visual parity\n\n### Asset rules\n- If the server returns a localhost source for an image/SVG, use it directly\n- Do NOT import new icon packages — all assets come from the Figma payload\n- Do NOT create placeholders if a localhost source is provided\n```\n\n### Break down large selections\n\nLarge selections cause errors or incomplete responses. Generate code for smaller sections (Card, Header, Sidebar) and compose them.\n\n## Rate Limits\n\n- Starter plan / View seats: up to 6 tool calls per month\n- Dev or Full seat on paid plans: per-minute limits matching Figma REST API Tier 1\n- Write operations (`use_figma`) are exempt from rate limits during beta\n\n## Community Agent Skills\n\nThe Figma community maintains open-source agent skills at [figma/community-resources](https://github.com/figma/community-resources). Categories include:\n\n- **Components**: `design-react-api` (propose React API from Figma components), `reconstruct-component-figma` (rebuild as Atomic Design system)\n- **Design Systems**: `ds-init-figma` (create complete design system on canvas), `ds-compliance-audit` (audit for token/component compliance)\n- **Design Generation**: `bridge-ds` (generate designs bound to your design system)\n- **Accessibility**: `apca-compliance-figma` (APCA contrast compliance audit)\n- **Design Process**: `screens-to-ia` (generate information architecture from screens)\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"github-cli","category":"tools","data":{"name":"GitHub CLI","description":"GitHub's official command-line tool for issues, pull requests, repositories, workflows, releases, and API calls. 
Prefer it before GitHub MCP for routine scoped GitHub tasks.","author":"github","source":"https://github.com/cli/cli","category":"cli-tool","tags":["github","cli","automation"],"featured":true},"content":"\n## Overview\n\nGitHub CLI (`gh`) is the preferred first tool for routine GitHub tasks because it is local, scriptable, easy to inspect with `gh help`, and returns focused command output.\n\n## Common Commands\n\n```bash\ngh issue list --repo NeexoCore/awesome-neexo\ngh pr list --repo NeexoCore/awesome-neexo\ngh workflow list --repo NeexoCore/awesome-neexo\ngh api repos/NeexoCore/awesome-neexo\n```\n\n## CLI vs MCP\n\nPrefer `gh` when the task is narrow: list issues, inspect a PR, view workflow runs, create an issue, or call one API endpoint.\n\nUse GitHub MCP when the agent needs richer structured GitHub access across multiple operations or when the workflow benefits from MCP tool integration.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"github-mcp-server","category":"tools","data":{"name":"GitHub MCP Server","description":"Official GitHub MCP server — gives Copilot agents access to GitHub APIs: issues, PRs, repos, code search, and more.","author":"github","source":"https://github.com/github/github-mcp-server","category":"mcp-server","tags":["mcp","github","official"],"featured":true},"content":"\n## Overview\n\nThe official GitHub MCP server from GitHub. 
Enables Copilot agents to:\n\n- Read/create/update issues and pull requests\n- Search code across repositories\n- Manage branches and commits\n- Access GitHub Actions workflows\n\n## Installation\n\nThe server ships as a hosted remote endpoint and as a Docker image (it is a Go binary, not an npm package). The simplest option is the remote server; add it to `.vscode/mcp.json`:\n\n```json\n{\n  \"servers\": {\n    \"github\": {\n      \"type\": \"http\",\n      \"url\": \"https://api.githubcopilot.com/mcp/\"\n    }\n  }\n}\n```\n\nOr run it locally with Docker and a personal access token:\n\n```bash\ndocker run -i --rm -e GITHUB_PERSONAL_ACCESS_TOKEN ghcr.io/github/github-mcp-server\n```\n","lastUpdated":"2026-05-08T13:29:32+02:00"},{"slug":"mcp-unity","category":"tools","data":{"name":"MCP Unity Server","description":"MCP bridge between Unity Editor and AI coding assistants — manipulate scenes, GameObjects, components, materials, and tests via 30+ tools from VS Code, Cursor, or Claude Code.","author":"CoderGamester","source":"https://github.com/CoderGamester/mcp-unity","category":"mcp-server","tags":["mcp","unity","game-development"],"featured":true},"content":"\n## Overview\n\nMCP Unity (v1.3.0, 1.7k+ stars) is the leading open-source MCP server for Unity Editor. It lets AI coding assistants interact with your Unity project in real time via a WebSocket bridge. 
The server runs as a Node.js process and connects to a Unity Editor package.\n\nRequires **Unity 6+** and **Node.js 18+**.\n\n## Key Tools\n\n| Tool | Purpose |\n|---|---|\n| `execute_menu_item` | Trigger any Unity menu item |\n| `select_gameobject` | Select objects in the hierarchy |\n| `update_gameobject` | Create or modify GameObject properties |\n| `update_component` | Add/update components and their fields |\n| `get_gameobject` | Inspect GameObject with all components |\n| `create_prefab` | Create prefabs with optional scripts |\n| `create_scene` / `load_scene` / `save_scene` | Scene management |\n| `move_gameobject` / `rotate_gameobject` / `scale_gameobject` | Transform operations |\n| `create_material` / `assign_material` / `modify_material` | Material workflow |\n| `run_tests` | Execute Unity Test Runner tests |\n| `add_package` | Install packages via Package Manager |\n| `batch_execute` | Atomic batch operations with rollback |\n| `get_console_logs` | Read Unity console with pagination |\n| `recompile_scripts` | Trigger script recompilation |\n\n## Resources\n\n- `unity://scenes-hierarchy` — Full scene hierarchy\n- `unity://gameobject/{id}` — Detailed component inspection\n- `unity://logs` — Console logs\n- `unity://packages` — Package Manager state\n- `unity://assets` — Asset Database queries\n- `unity://tests/{testMode}` — Test Runner listing\n\n## Installation\n\n### 1. Install the Unity package\n\n```\nWindow > Package Manager > + > Add package from git URL:\nhttps://github.com/CoderGamester/mcp-unity.git\n```\n\n### 2. Configure VS Code\n\nAdd to `.vscode/mcp.json`:\n\n```json\n{\n  \"servers\": {\n    \"mcp-unity\": {\n      \"type\": \"stdio\",\n      \"command\": \"node\",\n      \"args\": [\"<path-to-project>/Library/PackageCache/com.gamestar.mcp-unity@*/Server~/build/index.js\"]\n    }\n  }\n}\n```\n\nOr use the Unity Editor UI: **Tools > MCP Unity > Server Window > Configure**.\n\n### 3. 
Start the server\n\n**Tools > MCP Unity > Server Window > Start Server**, then open your AI coding IDE.\n\n## IDE Integration\n\nThe package automatically adds Unity's `Library/PackageCache` folder to your VS Code workspace for better code intelligence and AI context about Unity packages.\n\n## Tips\n\n- Break complex scene modifications into `batch_execute` calls for atomicity.\n- Use `get_console_logs` with error filter after operations to catch issues.\n- Start with `unity://scenes-hierarchy` to understand the scene before making changes.\n- The default WebSocket port is `8090` — configurable in the Server Window.\n\n## Unity's Official MCP Server (Beta)\n\nUnity also ships an official **Unity AI MCP Server** as part of the Unity AI Suite (beta). It requires Unity 6+ and a Unity Cloud link. The official server offers:\n- Agentic in-editor assistant tuned for Unity workflows\n- AI Gateway for connecting custom models\n- Asset generation from images and references\n\nThe official server is free during beta but will become a paid feature. 
For open-source, self-hosted workflows, use mcp-unity above.\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"awesome-neexo-cli","category":"tools","data":{"name":"awesome-neexo CLI","description":"A bootstrap CLI for adding Neexo AI Hub conventions, Copilot instructions, and optional MCP server config to new projects.","author":"NeexoCore","source":"https://github.com/NeexoCore/awesome-neexo","category":"cli-tool","tags":["cli","bootstrap","neexo"],"featured":false},"content":"\n## Overview\n\nUse `awesome-neexo` to initialize a project with Neexo's baseline AI-assisted development conventions.\n\n## Usage\n\n```bash\nnpx awesome-neexo init --figma\nnpx awesome-neexo init --unity\nnpx awesome-neexo init --figma --unity --force\n```\n\nThe CLI creates `.github/copilot-instructions.md` and, when requested, `.vscode/mcp.json` entries for Figma or Unity MCP workflows.","lastUpdated":"2026-05-16T13:53:13+02:00"},{"slug":"context7-mcp","category":"tools","data":{"name":"Context7 MCP Server","description":"Provides Copilot agents with up-to-date library documentation, pulling directly from official sources instead of relying on training data.","author":"Upstash","source":"https://github.com/upstash/context7","category":"mcp-server","tags":["mcp","documentation","context"],"featured":false},"content":"\n## Overview\n\nContext7 solves the \"stale knowledge\" problem. Instead of relying on Copilot's training data (which may be outdated), it fetches live documentation for any library. 
Maintained by Upstash with 55k+ stars on GitHub.\n\n## Use Cases\n\n- Get current API docs for a library that was updated after Copilot's training cutoff\n- Look up correct function signatures and parameters\n- Find usage examples from official documentation\n- Get version-specific docs by mentioning the version in your prompt\n\n## Quick Setup\n\nThe recommended way to set up Context7 is the interactive CLI:\n\n```bash\nnpx ctx7 setup\n```\n\nThis authenticates via OAuth, generates an API key, and installs the appropriate skill. Use `--cursor`, `--claude`, or `--opencode` to target a specific agent.\n\n## Manual MCP Configuration\n\nIf you prefer manual setup, use the Context7 server URL and pass the API key from [context7.com/dashboard](https://context7.com/dashboard) as a header:\n\n```json\n{\n  \"mcp\": {\n    \"servers\": {\n      \"context7\": {\n        \"url\": \"https://mcp.context7.com/mcp\",\n        \"headers\": {\n          \"CONTEXT7_API_KEY\": \"<your-api-key>\"\n        }\n      }\n    }\n  }\n}\n```\n\n## Available Tools\n\n- `resolve-library-id` — resolves a library name into a Context7-compatible library ID\n- `get-library-docs` — retrieves documentation for a library using its Context7 ID\n\n## Usage\n\nOnce configured, agents fetch live docs automatically:\n\n```\n@agent What's the correct way to use React Server Components with the latest Next.js?\n```\n\nYou can also specify a library ID directly to skip the matching step:\n\n```\nImplement basic auth with Supabase. 
use library /supabase/supabase for API and docs.\n```\n\nThe agent pulls current docs via Context7 instead of guessing from training data.\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"playwright-mcp","category":"tools","data":{"name":"Playwright MCP","description":"MCP server for browser automation via Playwright — enables Copilot agents to navigate, click, fill forms, take screenshots, and run accessibility audits in a real browser.","author":"microsoft","source":"https://github.com/microsoft/playwright-mcp","category":"mcp-server","tags":["mcp","browser","testing","playwright"],"featured":false},"content":"\n## Overview\n\nMicrosoft's official Playwright MCP server. Gives Copilot agents full browser control:\n\n- Navigate pages and click elements\n- Fill forms and interact with UI\n- Take screenshots and accessibility snapshots\n- Run Lighthouse audits\n- Monitor network requests and console logs\n\n## Installation\n\n```json\n{\n  \"mcp\": {\n    \"servers\": {\n      \"playwright\": {\n        \"command\": \"npx\",\n        \"args\": [\"@playwright/mcp@latest\"]\n      }\n    }\n  }\n}\n```\n","lastUpdated":"2026-05-08T13:29:32+02:00"},{"slug":"unity-bridge","category":"tools","data":{"name":"Unity Bridge","description":"File-based protocol that lets any AI agent see and control Unity Editor through plain JSON files — 25 commands, lens system, screenshots, and texture catalog with zero server dependencies.","author":"cziberpv","source":"https://github.com/cziberpv/unity-bridge","category":"editor-extension","tags":["unity","game-development","file-based"],"featured":false},"content":"\n## Overview\n\nUnity Bridge (v1.3.1, 53+ stars) is a lightweight alternative to MCP-based Unity integrations. Instead of WebSockets or HTTP servers, it uses a simple file-polling protocol: the AI agent writes a JSON command to `request.json`, Unity Editor processes it, and the result appears in `response.md`. 
Works with any AI agent that can write files — Claude Code, Cursor, Copilot, or plain shell scripts.\n\nRequires **Unity 2021.3+**. No external dependencies beyond Newtonsoft.Json (ships with Unity).\n\n## Key Features\n\n| Feature | Description |\n|---|---|\n| **Lens system** | Filter inspect output by domain (`layout`, `physics`, `scripts`, `visual`, `all`) to minimize token usage |\n| **Screenshots** | Capture Game View by entering Play Mode automatically |\n| **Scratch pad** | Run arbitrary C# scripts inside the Editor via `BridgeScratch.cs` |\n| **Batch commands** | Send a JSON array, get a combined response |\n| **Compilation tracking** | `refresh` triggers recompilation and returns errors with file:line |\n| **Texture catalog** | Scan, search, tag, and preview project textures (experimental) |\n\n## Commands (25 total)\n\n**Read:** `scene`, `inspect`, `find`, `prefab`, `prefabs`, `selection`, `errors`, `help`\n**Write:** `create`, `delete`, `rename`, `duplicate`, `add-component`, `delete-component`, `set`, `save-scene`, `new-scene`, `open-scene`, `refresh`\n**Media:** `screenshot`, `texture-scan`, `texture-search`, `texture-preview`, `texture-tag`, `texture-tag-batch`\n**Script:** `scratch`\n\n## Unity Bridge vs MCP Unity\n\n| Aspect | Unity Bridge | MCP Unity |\n|---|---|---|\n| Protocol | File I/O (JSON → Markdown) | WebSocket / MCP |\n| Dependencies | None (ships with Unity) | Node.js 18+, MCP server |\n| Setup | One git URL in Package Manager | Install server + configure ports |\n| Agent support | Any agent that can write files | Requires MCP client support |\n| Tools | 25 commands + lens system | 30+ MCP tools + resources |\n\nChoose **Unity Bridge** for simplicity and universal agent compatibility. 
Choose **MCP Unity** for richer tool integration and IDE-native MCP support.\n\n## Installation\n\nIn Unity: Window → Package Manager → + → Add package from git URL:\n\n```\nhttps://github.com/cziberpv/unity-bridge.git\n```\n\nThe package auto-installs and copies `unity-cmd.ps1` to your project root.\n\n## Usage\n\n```powershell\n# View scene hierarchy\npowershell -File unity-cmd.ps1 '{\"type\": \"scene\"}'\n\n# Inspect with lens filtering\npowershell -File unity-cmd.ps1 '{\"type\": \"inspect\", \"path\": \"Player\", \"lens\": \"scripts\"}'\n\n# Create GameObject with components\npowershell -File unity-cmd.ps1 '[{\"type\": \"create\", \"path\": \"Enemy\", \"components\": [\"Rigidbody2D\", \"BoxCollider2D\"]}]'\n\n# Compile and check errors\npowershell -File unity-cmd.ps1 '{\"type\": \"refresh\"}' -Timeout 120\n```\n","lastUpdated":"2026-05-16T14:00:47+02:00"}]},{"category":"cookbook","entries":[{"slug":"cost-optimization-guide","category":"cookbook","data":{"name":"AI Cost Optimization Guide","description":"How Neexo developers can reduce AI Credit usage through model selection, scoped prompts, small contexts, and monthly usage review.","author":"NeexoCore","difficulty":"beginner","tags":["cost","ai-credits","billing"],"featured":true},"content":"\n## Core Principle\n\nThe expensive pattern is:\n\n```text\nexpensive model + large repository context + vague task\n```\n\nAvoid that combination.\n\n## Practical Rules\n\n1. Use autocomplete for routine coding.\n2. Use lightweight or default models for simple questions, small edits, and formatting.\n3. Reserve stronger models for architecture, hard debugging, or high-risk review.\n4. Give file or folder scope in every prompt.\n5. Split large work into smaller sessions.\n6. Use cost-conscious review prompts instead of broad repository review.\n\n## Monthly Review\n\nDownload the GitHub Copilot usage CSV and run the monthly usage review prompt from the Neexo Guardrails plugin. 
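As a rough first pass before opening the CSV in a spreadsheet, standard toolbelt commands can surface the top consumers (the column layout here is an assumption; check the header of your actual export):

```shell
# Rough per-user row count from a Copilot usage CSV export.
# Assumes column 1 is the user login; confirm with: head -1 usage.csv
tail -n +2 usage.csv | cut -d, -f1 | sort | uniq -c | sort -rn | head
```

Swap the column index to group by model or by day instead.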
Look for top users, top models, outlier days, and agent sessions that used too much context.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"bootstrap-new-project","category":"cookbook","data":{"name":"Bootstrap a New Neexo Project","description":"Step-by-step guide for setting up a new Neexo project with the recommended .github structure, plugin installation, and project-specific copilot-instructions.","author":"NeexoCore","difficulty":"beginner","tags":["onboarding","project-setup","copilot"],"featured":true},"content":"\n## Overview\n\nWhen starting a new Neexo project, the `.github/` structure is as important as the code. This recipe gives your project a Copilot setup that matches the maturity of neexo-internal and neexo-configurator from day one.\n\n## Step 1: Install the Guardrails Plugin\n\n```bash\n# Register the Neexo marketplace (one-time)\ncopilot plugin marketplace add NeexoCore/awesome-neexo\n\n# Install guardrails (cost, security, database, testing instructions)\ncopilot plugin install neexo-guardrails@awesome-neexo\n\n# Install developer tools (code review, commit messages)\ncopilot plugin install neexo-developer@awesome-neexo\n```\n\n## Step 2: Create the .github Structure\n\n```\n.github/\n├── copilot-instructions.md    # Project-specific rules\n├── instructions/               # Path-specific instructions\n│   └── (your domain files)\n├── agents/                     # Optional: specialized agents\n├── prompts/                    # Optional: scaffolding prompts\n├── skills/                     # Optional: custom skills\n└── workflows/                  # CI/CD\n```\n\n## Step 3: Write copilot-instructions.md\n\nUse this template and fill in your project's details:\n\n```markdown\n---\napplyTo: '**'\ndescription: 'Global project rules for <project-name>'\n---\n\n# <Project Name> — Project Rules\n\n## Stack\n\n- **Framework:** (e.g., Next.js 16 App Router)\n- **Language:** (e.g., TypeScript 5 strict)\n- **Styling:** (e.g., Tailwind CSS 4 + 
shadcn/ui v4)\n- **Database:** (e.g., Drizzle ORM + PostgreSQL/Neon)\n- **Auth:** (e.g., Clerk v7)\n- **Validation:** (e.g., Zod v4 — import from \"zod/v4\")\n- **Package manager:** (npm/pnpm)\n- **Deployment:** (e.g., Vercel via GitHub Actions)\n\n## Critical Conventions\n\n(List your non-negotiable rules here — things every agent must know)\n\n## Project Structure\n\n(Outline key directories and their purpose)\n\n## Build & Validation\n\n(List the commands to run before committing)\n\n## Do NOT\n\n(List common mistakes specific to your project)\n```\n\n## Step 4: Add Path-Specific Instructions\n\nCreate instruction files in `.github/instructions/` for specific domains:\n\n```markdown\n---\napplyTo: '**/db/**'\ndescription: 'Database and migration rules'\n---\n\n# Database Rules\n\n- Always scope queries by orgId\n- Keep schema + migration + code in the same commit\n```\n\n## Step 5: Verify Setup\n\nOpen VS Code, start a Copilot chat, and ask:\n\n```\nWhat instructions and plugins are loaded for this workspace?\n```\n\nUse **Copilot Chat Diagnostics** (Command Palette → \"Chat: Show Diagnostics\") to confirm all instructions are active.\n\n## Stack-Specific Recommendations\n\n### Next.js + Drizzle + Clerk (like neexo-internal)\n\nInstall guardrails + developer plugins, then add instructions for:\n- `nextjs-conventions` (from marketplace)\n- `drizzle-database` (from marketplace)\n- `clerk-auth` (from marketplace)\n- `zod-v4` (from marketplace)\n- `shadcn-ui-v4` (from marketplace)\n- Domain-specific rules in `.github/instructions/`\n\n### Next.js + R3F + i18n (like neexo-homepage)\n\nAdd instructions for:\n- `nextjs-conventions` (from marketplace)\n- `react-three-fiber` (from marketplace)\n- `next-intl` (from marketplace)\n- 3D component patterns in `.github/instructions/3d.instructions.md`\n\n### Python + Blender (like neexo-render-studio)\n\nAdd instructions for:\n- `blender-python` (from marketplace)\n- Subprocess and rendering rules in 
`.github/instructions/`\n\n### Unity C# (like neexo-unity)\n\nAdd instructions for:\n- Singleton patterns, versioning rules in `.github/instructions/`\n\n## Pro Tips\n\n- **Keep the copilot-instructions.md under 500 lines** — split domain knowledge into separate instruction files\n- **Add prompts for scaffolding** — if your team creates the same file types often (API routes, components, pages), create `.github/prompts/` templates\n- **Reference the llms.txt** — share `https://awesome.neexo.dk/api/llms.txt` with external AI tools that need Neexo context\n- **Review monthly** — as your project evolves, update instructions to match current patterns\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"create-custom-agent","category":"cookbook","data":{"name":"Create a Custom Agent","description":"How to write your own Copilot agent with a .agent.md file and optional MCP server connections.","author":"NeexoCore","difficulty":"intermediate","tags":["agents","customization","intermediate"],"featured":true},"content":"\n## Overview\n\nCustom agents let you create specialized Copilot personas with specific instructions, tools, and behaviors. This recipe shows you how to create one from scratch.\n\n## Steps\n\n### 1. Create the agent file\n\nCreate `.github/agents/my-agent.agent.md`:\n\n```markdown\n---\nname: my-agent\ndescription: \"Helps with database migrations and schema design\"\ntools:\n  - name: github\n  - name: postgres\n---\n\nYou are an expert database engineer. When asked about schema design:\n- Always suggest migrations, never direct ALTER TABLE\n- Use snake_case for column names\n- Add created_at and updated_at to every table\n- Prefer UUID over auto-increment for primary keys\n```\n\n### 2. Add MCP server connections (optional)\n\nReference MCP servers in the `tools` frontmatter:\n\n```yaml\ntools:\n  - name: github\n  - name: postgres\n    config:\n      connectionString: \"${DATABASE_URL}\"\n```\n\n### 3. Test the agent\n\n1. 
Open Copilot Chat in VS Code\n2. Type `@my-agent` to invoke it\n3. Ask: \"Design a schema for a blog with posts and comments\"\n\n## Best Practices\n\n- Keep the system prompt focused on one domain\n- List specific do's and don'ts\n- Reference MCP servers only when the agent needs external data\n- Test with real scenarios before sharing\n","lastUpdated":"2026-05-08T13:29:32+02:00"},{"slug":"getting-started-neexo-ai","category":"cookbook","data":{"name":"Getting Started with Neexo AI","description":"A practical onboarding path for new Neexo employees using GitHub Copilot, Neexo plugins, model selection, and cost-aware prompting.","author":"NeexoCore","difficulty":"beginner","tags":["onboarding","copilot","neexo"],"featured":true},"content":"\n## Prerequisites\n\n- VS Code with GitHub Copilot enabled\n- Access to the NeexoCore GitHub organization\n- Access to the relevant project repositories\n\n## Install the Marketplace\n\nRegister this marketplace once:\n\n```bash\ncopilot plugin marketplace add NeexoCore/awesome-neexo\n```\n\nThen install the recommended baseline plugins:\n\n```bash\ncopilot plugin install neexo-developer@awesome-neexo\ncopilot plugin install neexo-guardrails@awesome-neexo\n```\n\n## Daily Workflow\n\nUse autocomplete for routine code. It is still the cheapest and fastest AI workflow.\n\nUse chat or agent mode when you need explanation, planning, review, or multi-file changes. 
Give the agent a goal, file scope, constraints, and validation expectation.\n\n## Prompt Pattern\n\n```text\nFix the validation bug in <file or folder>.\nOnly inspect related files and tests.\nDo not change UI.\nRun or suggest the smallest relevant validation command.\n```\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"setup-mcp-server","category":"cookbook","data":{"name":"Set Up an MCP Server","description":"Step-by-step guide to configuring an MCP server in VS Code for use with Copilot agents.","author":"NeexoCore","difficulty":"beginner","tags":["mcp","setup","beginner"],"featured":true},"content":"\n## Overview\n\nMCP (Model Context Protocol) servers give Copilot agents access to external tools and data sources. This recipe walks you through setting one up.\n\n## Steps\n\n### 1. Choose an MCP server\n\nPopular options:\n- `github/github-mcp-server` — GitHub API access (remote endpoint or Docker image)\n- `@playwright/mcp` — Browser automation\n- `@modelcontextprotocol/server-filesystem` — Local file access\n\n### 2. Add to VS Code settings\n\nOpen `.vscode/mcp.json` or your user settings:\n\n```json\n{\n  \"mcp\": {\n    \"servers\": {\n      \"github\": {\n        \"command\": \"docker\",\n        \"args\": [\"run\", \"-i\", \"--rm\", \"-e\", \"GITHUB_PERSONAL_ACCESS_TOKEN\", \"ghcr.io/github/github-mcp-server\"],\n        \"env\": {\n          \"GITHUB_PERSONAL_ACCESS_TOKEN\": \"\"\n        }\n      }\n    }\n  }\n}\n```\n\n### 3. Verify the connection\n\n1. Open Copilot Chat\n2. Type `@agent` and check if the MCP tools are listed\n3. 
Try a command like \"list my open PRs\"\n\n## Tips\n\n- Use workspace-level `.vscode/mcp.json` for project-specific servers\n- Use user settings for servers you want everywhere\n- Never commit tokens — use environment variables\n","lastUpdated":"2026-05-08T13:29:32+02:00"},{"slug":"windows-ai-toolbelt","category":"cookbook","data":{"name":"Windows AI Coding Toolbelt","description":"A Windows setup recipe for installing compact CLI tools that help agents search, inspect, diff, and validate code with less context.","author":"NeexoCore","difficulty":"beginner","tags":["windows","cli","toolbelt"],"featured":true},"content":"\n## Purpose\n\nA small CLI toolbelt helps developers and agents avoid broad repository reads. Tools such as `rg`, `fd`, `jq`, `yq`, `sg`, and `gh` can return precise output that is cheaper to reason about than large file dumps.\n\n## Install\n\nRun in PowerShell:\n\n```powershell\n$tools = @(\n  @{ Name = \"Git\"; Id = \"Git.Git\" },\n  @{ Name = \"GitHub CLI\"; Id = \"GitHub.cli\" },\n  @{ Name = \"ripgrep\"; Id = \"BurntSushi.ripgrep.MSVC\" },\n  @{ Name = \"fd\"; Id = \"sharkdp.fd\" },\n  @{ Name = \"jq\"; Id = \"jqlang.jq\" },\n  @{ Name = \"yq\"; Id = \"MikeFarah.yq\" },\n  @{ Name = \"bat\"; Id = \"sharkdp.bat\" },\n  @{ Name = \"git-delta\"; Id = \"dandavison.delta\" },\n  @{ Name = \"fzf\"; Id = \"junegunn.fzf\" },\n  @{ Name = \"ast-grep\"; Id = \"ast-grep.ast-grep\" },\n  @{ Name = \"tokei\"; Id = \"XAMPPRocky.Tokei\" },\n  @{ Name = \"eza\"; Id = \"eza-community.eza\" },\n  @{ Name = \"zoxide\"; Id = \"ajeetdsouza.zoxide\" },\n  @{ Name = \"PowerShell 7\"; Id = \"Microsoft.PowerShell\" },\n  @{ Name = \"Windows Terminal\"; Id = \"Microsoft.WindowsTerminal\" }\n)\n\nforeach ($tool in $tools) {\n  winget install --id $tool.Id --exact --accept-source-agreements --accept-package-agreements\n}\n```\n\nIf a package ID changes, run `winget search <tool>` and install the official package.\n\n## Verify\n\n```powershell\ngit --version\ngh 
--version\nrg --version\nfd --version\njq --version\nyq --version\nbat --version\ndelta --version\nfzf --version\nsg --version\ntokei --version\neza --version\nzoxide --version\npwsh --version\nwt --version\n```\n\n## Configure Delta\n\n```powershell\ngit config --global core.pager delta\ngit config --global interactive.diffFilter \"delta --color-only\"\ngit config --global delta.navigate true\ngit config --global merge.conflictstyle zdiff3\n```\n\n## Agent Rules\n\nAgents may install missing approved toolbelt tools with `winget`, verify them with `--version`, and continue. They must ask before modifying PowerShell profile files.\n\nDo not run destructive commands.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"ai-context-template","category":"cookbook","data":{"name":"AI Context Template","description":"A short project-context template that helps agents understand purpose, stack, high-risk areas, validation commands, and files to avoid.","author":"NeexoCore","difficulty":"beginner","tags":["context","onboarding","guardrails"],"featured":false},"content":"\n## Purpose\n\nEach active repository should have a short `docs/AI_CONTEXT.md` file so agents do not rediscover basic project facts every session.\n\n## Recommended Sections\n\n- Project purpose\n- Tech stack\n- Important folders\n- High-risk areas\n- Files and folders to avoid by default\n- Validation commands\n- Known constraints\n- Manual verification needed\n\n## Keep It Short\n\nOne to two pages is enough. 
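A minimal skeleton for `docs/AI_CONTEXT.md`, using the section list above (all values are placeholders to replace):

```markdown
# AI Context: <project-name>

## Project purpose
One sentence on what this repository does and for whom.

## Tech stack
Framework, language, database, deployment target.

## High-risk areas
Folders where mistakes are expensive (auth, billing, migrations).

## Files and folders to avoid by default
Generated output, vendored code, large binary assets.

## Validation commands
The smallest commands that prove a change is safe.
```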
Long architecture manuals should be separate docs that agents open only when needed.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"auto-plugin-updates","category":"cookbook","data":{"name":"Check for Plugin Updates","description":"A lightweight workflow pattern for checking Neexo Copilot plugin versions because Copilot plugins do not update automatically in consumer repositories.","author":"NeexoCore","difficulty":"intermediate","tags":["plugins","automation","github-actions"],"featured":false},"content":"\n## Why This Exists\n\nCopilot plugins are updated manually with:\n\n```bash\ncopilot plugin update <plugin-name>\n```\n\nConsumer repositories do not automatically receive plugin updates.\n\n## Weekly Check Pattern\n\nAdd a scheduled workflow that fetches the marketplace and prints available versions. Teams can extend it to open an issue or PR.\n\n```yaml\nname: Check Copilot Plugin Updates\n\non:\n  schedule:\n    - cron: '0 9 * * 1'\n  workflow_dispatch:\n\njobs:\n  check:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Fetch Neexo marketplace\n        run: |\n          curl -fsSL https://raw.githubusercontent.com/NeexoCore/awesome-neexo/main/.github/plugin/marketplace.json \\\n            | jq -r '.plugins[] | \"\\(.name) \\(.version)\"'\n```\n\n## Team Process\n\nWhen a plugin update matters, update the local installed plugin with `copilot plugin update`, then verify Copilot Chat Diagnostics in VS Code.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"cli-vs-mcp","category":"cookbook","data":{"name":"CLI vs MCP: Choosing the Right Tool","description":"A practical guide for choosing local CLIs before MCP servers when they reduce context, cost, and setup overhead.","author":"NeexoCore","difficulty":"beginner","tags":["cli","mcp","cost"],"featured":false},"content":"\n## Short Version\n\nPrefer CLI first. Use MCP when it adds real value.\n\n## Why CLI First\n\nA good CLI is compact and discoverable. 
An agent can run `tool --help`, inspect a specific subcommand, and request narrowly scoped output. That usually costs less context than loading broad docs or connecting a general-purpose MCP server.\n\n## Use CLI When\n\n- the CLI is already installed\n- the task maps to one or a few commands\n- output can be filtered or formatted\n- help text is enough to understand the tool\n\n## Use MCP When\n\n- the task needs structured remote API access\n- the agent must coordinate several remote operations\n- the CLI lacks the capability\n- the MCP server exposes domain-specific tools that reduce manual command glue\n\n## Example\n\nUse `gh` for routine GitHub work:\n\n```bash\ngh pr view 123 --repo NeexoCore/awesome-neexo\ngh issue list --repo NeexoCore/awesome-neexo --label bug\ngh run list --repo NeexoCore/awesome-neexo\n```\n\nUse GitHub MCP when the agent needs broader GitHub context or repeated structured API calls.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"content-exclusions-guide","category":"cookbook","data":{"name":"Copilot Content Exclusions Guide","description":"A practical guide to keeping generated output, secrets, dependencies, large media, and 3D assets out of AI context by default.","author":"NeexoCore","difficulty":"beginner","tags":["content-exclusion","security","guardrails"],"featured":false},"content":"\n## Recommended Exclusions\n\nConfigure common exclusions at organization level when possible:\n\n```yaml\n- \"/**/node_modules/**\"\n- \"/**/.next/**\"\n- \"/**/dist/**\"\n- \"/**/build/**\"\n- \"/**/coverage/**\"\n- \"*.log\"\n- \"*.lock\"\n- \"*.glb\"\n- \"*.gltf\"\n- \"*.bin\"\n- \"*.hdr\"\n- \"*.exr\"\n- \"/**/.env\"\n- \"/**/.env.*\"\n```\n\n## Limitations\n\nContent exclusions do not replace repository hygiene. 
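\n\nExclusions pair with ordinary `.gitignore` hygiene; a minimal sketch (the patterns are examples, and some Neexo projects intentionally commit encrypted `.env` files, so adjust per project):\n\n```gitignore\nnode_modules/\n.next/\ndist/\ncoverage/\n*.log\n.env.local\n.env.keys\n```\n\n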
Keep secrets out of repositories, keep generated assets out of routine agent workflows, and keep instructions explicit about what not to read.\n","lastUpdated":"2026-05-16T11:43:58+02:00"},{"slug":"figma-to-code-workflow","category":"cookbook","data":{"name":"Figma-to-Code Workflow","description":"Step-by-step recipe for turning Figma designs into production code using the official Figma MCP server, Code Connect, and AI agent rules.","author":"NeexoCore","difficulty":"intermediate","tags":["figma","design-to-code","workflow","mcp"],"featured":false},"content":"\n## Prerequisites\n\n- Figma account with Dev or Full seat (paid plan) for unlimited tool calls\n- AI coding agent with MCP support (VS Code + Copilot, Cursor, Claude Code)\n- Figma MCP server connected (`https://mcp.figma.com/mcp`)\n\n## Step 1: Prepare Your Figma File\n\nGood Figma structure = good generated code.\n\n- **Use components** for every reusable element (buttons, inputs, cards)\n- **Use variables** for all tokens: colors, spacing, border-radius, typography\n- **Name layers semantically**: `HeroSection`, `PricingCard`, not `Frame 47`\n- **Apply Auto Layout** to communicate responsive intent\n- **Add annotations** for behavior (hover states, transitions, conditional visibility)\n\n## Step 2: Set Up Code Connect (Optional but Recommended)\n\nCode Connect maps Figma components to your actual codebase components. 
Without it, the AI guesses which component to use.\n\n```bash\n# Install Code Connect CLI\nnpm install -g @figma/code-connect\n\n# Initialize in your project\nfigma connect init\n\n# Publish component mappings\nfigma connect publish\n```\n\nThis creates a mapping file that tells the Figma MCP server: \"When you see this Figma component, use `<Button>` from `@/components/ui/button`\".\n\n## Step 3: Add Agent Rules\n\nCreate a rules file for your IDE to enforce consistent Figma-to-code workflows:\n\n```markdown\n---\ndescription: Figma MCP server rules\napplyTo: \"**/*\"\n---\n\n## Required flow for every Figma implementation\n\n1. Run `get_design_context` first to fetch the structured representation\n2. If response is truncated, run `get_metadata` then re-fetch specific nodes\n3. Run `get_screenshot` for visual reference\n4. Only after both context and screenshot, download assets and implement\n5. Translate output into this project's tokens and conventions\n6. Validate against Figma screenshot for 1:1 parity\n\n## Asset rules\n\n- Use localhost sources directly when the MCP server returns them\n- Never import new icon packages — all assets come from the Figma payload\n- Never create placeholder images if a source URL is provided\n\n## Code rules\n\n- Use project design tokens, not hardcoded values\n- Reuse existing components from the design system\n- Follow existing routing and state management patterns\n- Place generated components in the correct directory\n```\n\n## Step 4: Implementation Workflow\n\n### Small component (button, card, input)\n\n```\nPrompt: \"Implement this Figma component as a React component:\n[paste Figma link]\nUse our design tokens from @/lib/tokens and place it in src/components/ui/\"\n```\n\n### Full page\n\nBreak the page into sections:\n\n1. Copy the link to the **header section** → generate\n2. Copy the link to the **hero section** → generate\n3. Copy the link to the **content grid** → generate\n4. 
Compose them in a page component\n\n### Extract tokens only\n\n```\nPrompt: \"Get the variable definitions from this Figma file:\n[paste Figma link]\nOutput them as CSS custom properties matching our existing token format\"\n```\n\n## Step 5: Design System Sync\n\nFor ongoing projects, set up a periodic sync:\n\n1. **Audit compliance**: Use the `ds-compliance-audit` community skill to check your Figma file for token/component violations\n2. **Export tokens**: Extract variables via `get_variable_defs` and compare with your codebase tokens\n3. **Update Code Connect**: After adding new components, run `figma connect publish` to keep mappings current\n\n## Common Pitfalls\n\n- **Selecting too much at once**: Large selections time out or produce incomplete results. Select individual components.\n- **Not using variables**: If your Figma file uses hardcoded colors, the AI will hardcode colors too.\n- **Ignoring Code Connect**: Without it, every `<Button>` in Figma might generate a new custom button component instead of reusing yours.\n- **Skipping the screenshot step**: The screenshot provides visual context that the structured data alone may miss (shadows, gradients, exact spacing).\n","lastUpdated":"2026-05-16T13:04:56+02:00"},{"slug":"github-admin-ai-setup","category":"cookbook","data":{"name":"GitHub Admin AI Setup","description":"A checklist for configuring GitHub Copilot organization settings, content exclusions, model access, budgets, and review cadence for Neexo.","author":"NeexoCore","difficulty":"intermediate","tags":["github","admin","copilot"],"featured":false},"content":"\n## Scope\n\nRepository files can guide agents, but GitHub organization settings must enforce budgets, model access, and content exclusion.\n\n## Checklist\n\n- Confirm the active Copilot plan and usage-based billing settings\n- Configure organization-level content exclusions\n- Configure repository-level exclusions for generated output and assets where needed\n- Restrict expensive models by 
default\n- Review cloud agent, third-party agent, Spark, Spaces, and MCP policies\n- Review usage CSV monthly\n- Investigate outlier sessions and unusually expensive model usage\n\n## Content Exclusion Starter\n\nSee the [Copilot Content Exclusions Guide](/cookbook/content-exclusions-guide) for a complete exclusion list with explanations. At minimum, configure these at the organization level:\n\n```yaml\n- \"/**/node_modules/**\"\n- \"/**/.next/**\"\n- \"/**/dist/**\"\n- \"/**/.env\"\n- \"/**/.env.*\"\n```\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"secrets-management","category":"cookbook","data":{"name":"Secrets Management (dotenvx & Infisical)","description":"How to configure encrypted secrets for Neexo projects using dotenvx or Infisical, replacing manual .env file sharing.","author":"NeexoCore","difficulty":"beginner","tags":["secrets","security","dotenvx","infisical"],"featured":false},"content":"\n## Overview\n\nNeexo projects use encrypted secrets instead of sharing raw `.env` files. Two approaches are supported depending on the project.\n\n## Option 1: dotenvx (Encrypted .env in Git)\n\nUsed by: neexo-configurator, neexo-internal\n\nSecrets are encrypted and committed to git. A separate `.env.keys` file (shared securely between devs) decrypts them at runtime.\n\n### Setup\n\n```bash\n# Install dotenvx\nnpm install -g @dotenvx/dotenvx\n\n# Encrypt an .env file\ndotenvx encrypt .env.development\n\n# Start dev server with decryption\ndotenvx run -- next dev\n```\n\n### How It Works\n\n1. `.env.development` and `.env.production` are encrypted and committed to git\n2. `.env.keys` contains the decryption keys — **never commit this file**\n3. `dotenvx run` decrypts and injects env vars at runtime\n4. Fallback: `.env.local` still works if `.env.keys` is unavailable\n\n### First-Time Developer Setup\n\n1. Get the `.env.keys` file from a team member (share via secure channel)\n2. Place it in the repo root\n3. 
Run `pnpm dev` — the `predev` hook uses dotenvx automatically\n\n## Option 2: Infisical (Cloud Secrets Manager)\n\nUsed by: neexo-homepage\n\nSecrets are stored in Infisical cloud (EU region) and injected at runtime via CLI.\n\n### Setup\n\n```bash\n# Install Infisical CLI\nwinget install infisical\n\n# Authenticate (opens browser)\ninfisical login --domain https://eu.infisical.com/api\n\n# Start dev server with secrets injection\nnpm run dev:infisical\n# Equivalent to: infisical run --domain ... --env=dev -- next dev\n```\n\n### How It Works\n\n1. `.infisical.json` in repo root links to the correct Infisical project\n2. `infisical run` injects env vars from the cloud at runtime\n3. Fallback: copy `.env.example` to `.env.local` and fill in values manually\n\n## Rules\n\n- Never commit `.env.keys` or `.env.local` to git\n- Never share secrets via Slack, email, or other unencrypted channels\n- Use `dotenvx encrypt` after adding new env vars to encrypted files\n- Use Infisical's dashboard to manage rotation and access control\n- Always provide `.env.example` with placeholder values for documentation\n\n## Package.json Scripts\n\n```json\n{\n  \"scripts\": {\n    \"dev\": \"dotenvx run -- next dev\",\n    \"dev:infisical\": \"infisical run --domain https://eu.infisical.com/api --env=dev -- next dev\"\n  }\n}\n```\n","lastUpdated":"2026-05-16T12:55:32+02:00"},{"slug":"nextjs-16-upgrade","category":"cookbook","data":{"name":"Upgrade a Neexo Site to Next.js 16","description":"A migration checklist for moving Neexo Next.js apps to Next.js 16, React 19, async route params, and updated caching.","author":"NeexoCore","difficulty":"intermediate","tags":["nextjs","upgrade","react"],"featured":false},"content":"\n## Prerequisites\n\n- Node.js 18.18+ (recommended: 20+)\n- Current app on Next.js 14 or 15\n\n## Migration Steps\n\n### 1. 
Upgrade Dependencies\n\n```bash\nnpm install next@latest react@latest react-dom@latest\nnpm install -D eslint-config-next@latest @types/react@latest @types/react-dom@latest\n```\n\n### 2. Run the Next.js Codemod\n\n```bash\nnpx @next/codemod@latest upgrade\n```\n\nThis handles many automatic transformations including async API migrations.\n\n### 3. Fix Async Route APIs\n\n`params`, `searchParams`, `cookies()`, and `headers()` are now async. Update all route components:\n\n```tsx\n// Before (Next.js 14/15)\nexport default function Page({ params }: { params: { slug: string } }) {\n  return <div>{params.slug}</div>\n}\n\n// After (Next.js 16)\nexport default async function Page({ params }: { params: Promise<{ slug: string }> }) {\n  const { slug } = await params\n  return <div>{slug}</div>\n}\n```\n\n### 4. Update cookies() and headers()\n\n```tsx\n// Before\nconst cookieStore = cookies()\nconst token = cookieStore.get('token')\n\n// After\nconst cookieStore = await cookies()\nconst token = cookieStore.get('token')\n```\n\n### 5. Review Caching Changes\n\n- Next.js 16 no longer caches `fetch()` by default — add `{ cache: 'force-cache' }` where you relied on caching\n- Use `\"use cache\"` directive for fine-grained caching instead of route segment config\n- Review `generateStaticParams` behavior — dynamic pages are no longer force-static by default\n\n### 6. Build and Test\n\n```bash\nnpm run build\nnpx playwright test\n```\n\n### 7. 
Deploy and Monitor\n\n- Check deployment logs after the first production deploy\n- Watch for hydration mismatches caused by client/server content differences\n- Monitor Core Web Vitals for regressions\n\n## Common Issues\n\n- **TypeScript errors on params**: ensure you're using `Promise<>` wrapper and `await`\n- **Hydration mismatches**: often caused by `ThemeToggle` or other client components that read browser state — use `useSyncExternalStore` for safe SSR\n- **Missing data**: check if you relied on default `fetch` caching that is now opt-in\n","lastUpdated":"2026-05-16T12:39:57+02:00"},{"slug":"effective-instructions","category":"cookbook","data":{"name":"Write Effective Instructions","description":"Tips and patterns for writing .instructions.md files that consistently produce high-quality Copilot output.","author":"NeexoCore","difficulty":"beginner","tags":["instructions","best-practices","beginner"],"featured":false},"content":"\n## Overview\n\n`.instructions.md` files control how Copilot generates code for specific file patterns. Well-written instructions dramatically improve output quality.\n\n## Key Principles\n\n### Be specific, not generic\n\n```markdown\n# Bad\nWrite clean code.\n\n# Good\nUse early returns to avoid nesting. Max function length: 30 lines.\nNever use `any` — use `unknown` and narrow with type guards.\n```\n\n### Use applyTo patterns\n\nTarget instructions to specific file types:\n\n```yaml\napplyTo:\n  - \"**/*.test.ts\"\n  - \"**/*.spec.ts\"\n```\n\n### Include examples\n\n```markdown\n## Naming\n\n- Components: PascalCase (`UserProfile.tsx`)\n- Hooks: camelCase with `use` prefix (`useAuth.ts`)\n- Utils: camelCase (`formatDate.ts`)\n```\n\n### State what NOT to do\n\n```markdown\n## Don'ts\n\n- Don't add comments explaining obvious code\n- Don't use default exports\n- Don't use enums — use const objects with `as const`\n```\n\n## Template\n\n```markdown\n---\nname: \"My Standard\"\napplyTo:\n  - \"**/*.ts\"\n---\n\n## Rules\n\n1. 
Rule one\n2. Rule two\n\n## Examples\n\n### Good\n\\`\\`\\`ts\n// example\n\\`\\`\\`\n\n### Bad\n\\`\\`\\`ts\n// counter-example\n\\`\\`\\`\n```\n","lastUpdated":"2026-05-08T13:29:32+02:00"}]}]}