` where semantic elements are appropriate
+- Missing `main`, `nav`, `header`, `footer` landmarks
+- Lists (`<ul>`, `<ol>`) not used for list content
+- Missing `lang` attribute on `<html>`
+
+**Impact:** Screen reader users cannot navigate efficiently
+
+**Remediation:**
+
+- Maintain logical heading order (don't skip levels)
+- Use semantic HTML5 elements
+- Add ARIA landmarks if semantic HTML not possible
+- Wrap list items in proper list elements
+- Add `lang` attribute: `<html lang="en">`
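As a sketch of how the scanner might automate two of these structural checks using only Python's standard library (the class and function names here are illustrative, not part of any existing tool):

```python
from html.parser import HTMLParser

class StructureChecker(HTMLParser):
    """Flags a missing lang attribute on <html> and skipped heading levels."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_heading = 0  # 0 = no heading seen yet

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html" and not attrs.get("lang"):
            self.issues.append("missing lang attribute on <html>")
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            # Jumping more than one level down (e.g. h1 -> h3) breaks the outline.
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(
                    f"heading level skipped: h{self.last_heading} -> {tag}"
                )
            self.last_heading = level

def check_structure(html: str) -> list[str]:
    checker = StructureChecker()
    checker.feed(html)
    return checker.issues
```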
+
+### Category 6: ARIA Usage (WCAG A - Medium)
+
+**Detection:**
+
+- ARIA attributes on semantic HTML (redundant)
+- Invalid ARIA attribute values
+- `aria-label` or `aria-labelledby` missing on custom components
+- `role="presentation"` misused
+- `aria-hidden="true"` on focusable elements
+
+**Impact:** Screen readers receive incorrect or confusing information
+
+**Remediation:**
+
+- Remove redundant ARIA on semantic HTML
+- Validate ARIA values against spec
+- Add proper labels to custom interactive components
+- Use `role="presentation"` only for layout tables/images
+- Ensure hidden elements are not focusable
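The last detection rule lends itself to a simple static check. A minimal sketch (it ignores `tabindex="-1"`, which removes an element from the tab order; names are illustrative):

```python
from html.parser import HTMLParser

# Elements that are keyboard-focusable by default.
FOCUSABLE = {"a", "button", "input", "select", "textarea"}

class AriaHiddenChecker(HTMLParser):
    """Flags aria-hidden="true" on elements that can receive keyboard focus."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        hidden = attrs.get("aria-hidden") == "true"
        # Simplification: any tabindex counts as focusable, including "-1".
        focusable = tag in FOCUSABLE or "tabindex" in attrs
        if hidden and focusable:
            self.issues.append(f'aria-hidden="true" on focusable <{tag}>')

def find_hidden_focusable(html: str) -> list[str]:
    checker = AriaHiddenChecker()
    checker.feed(html)
    return checker.issues
```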
+
+### Category 7: Media Accessibility (WCAG A - High)
+
+**Detection:**
+
+- `<video>` without captions/subtitles
+- `<audio>` without transcripts
+- Autoplay media without user control
+- Missing media controls
+
+**Impact:** Deaf/hard-of-hearing users cannot access audio content
+
+**Remediation:**
+
+- Add `<track>` elements for captions (WebVTT)
+- Provide transcript links for audio
+- Remove `autoplay` or add `muted` attribute
+- Ensure native controls are enabled or custom controls are accessible
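Two of these media checks can be approximated with a regex pass. This is a heuristic sketch only (a real scanner should parse the DOM; the function name is illustrative):

```python
import re

def find_media_issues(html: str) -> list[str]:
    """Heuristic scan of <video> blocks for missing captions and autoplay."""
    issues = []
    for match in re.finditer(r"<video\b.*?</video>", html, re.S | re.I):
        block = match.group(0).lower()
        if "<track" not in block:
            issues.append("<video> without <track> captions")
        # Autoplaying with sound violates user control over media.
        if "autoplay" in block and "muted" not in block:
            issues.append("<video> autoplays with sound")
    return issues
```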
+
+### Category 8: Dynamic Content (WCAG A - Medium)
+
+**Detection:**
+
+- Content updates without `aria-live` regions
+- Focus not managed during route changes
+- Infinite scroll without keyboard alternatives
+- Loading states not announced
+
+**Impact:** Screen reader users miss dynamic updates
+
+**Remediation:**
+
+- Add `aria-live="polite"` for non-critical updates
+- Use `aria-live="assertive"` for critical updates
+- Manage focus on route/content changes
+- Provide "Load More" button alternative
+- Use `aria-busy="true"` during loading
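Missing live regions are hard to prove statically, but a crude signal is possible: a file that injects content dynamically yet never declares a live region deserves human review. A hedged sketch (the marker strings are heuristics, not a spec):

```python
def flags_missing_live_region(source: str) -> bool:
    """Heuristic: dynamic content updates with no aria-live/aria-busy anywhere.

    Crude signal only; it cannot see components imported from other files,
    so a human must confirm each flag.
    """
    dynamic = any(
        marker in source for marker in ("innerHTML", "fetch(", "XMLHttpRequest")
    )
    announced = "aria-live" in source or "aria-busy" in source
    return dynamic and not announced
```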
+
+### Category 9: Mobile Accessibility (WCAG AA - Medium)
+
+**Detection:**
+
+- Touch targets < 44x44px
+- Viewport zoom disabled (`user-scalable=no`)
+- Horizontal scrolling required on mobile
+- Content not responsive to text resize
+
+**Impact:** Users with motor disabilities or low vision struggle on mobile
+
+**Remediation:**
+
+- Increase touch target sizes to at least 44x44px (WCAG 2.5.5; WCAG 2.2 AA requires 24x24px minimum)
+- Remove viewport zoom restrictions
+- Implement responsive design
+- Test with 200% text zoom
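The viewport-zoom restriction is the easiest of these to detect automatically. A minimal sketch (substring matching is deliberately crude; `maximum-scale=10` would false-positive and should be filtered in a real tool):

```python
import re

def viewport_blocks_zoom(html: str) -> bool:
    """True if a viewport meta tag appears to disable or cap pinch-to-zoom."""
    pattern = r'<meta[^>]*name=["\']viewport["\'][^>]*>'
    for match in re.finditer(pattern, html, re.I):
        content = match.group(0).lower()
        # maximum-scale=1.x still caps zoom below the 200% WCAG requires.
        if "user-scalable=no" in content or "maximum-scale=1" in content:
            return True
    return False
```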
+
+### Category 10: Skip Links & Navigation (WCAG A - Medium)
+
+**Detection:**
+
+- Missing "Skip to main content" link
+- Skip links not keyboard accessible
+- Multiple navigation menus without labels
+- Breadcrumbs without proper markup
+
+**Impact:** Keyboard users must tab through navigation repeatedly
+
+**Remediation:**
+
+- Add skip link as first focusable element
+- Ensure skip link is visible on focus
+- Add `aria-label` to multiple `nav` elements
+- Use `<nav>` with proper ARIA (e.g. `aria-label="Breadcrumb"`) for breadcrumbs
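The "skip link first" rule can be approximated by checking whether the first link in the document targets an in-page anchor. An illustrative sketch:

```python
from html.parser import HTMLParser

class SkipLinkChecker(HTMLParser):
    """Records the href of the first <a> element in the document."""

    def __init__(self):
        super().__init__()
        self.first_href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a" and self.first_href is None:
            self.first_href = dict(attrs).get("href", "")

def has_skip_link(html: str) -> bool:
    # A skip link should be the first focusable element and point in-page.
    checker = SkipLinkChecker()
    checker.feed(html)
    return bool(checker.first_href) and checker.first_href.startswith("#")
```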
+
+## GitHub Issue Template
+
+```markdown
+## Accessibility Issue: [Brief Description]
+
+**WCAG Level:** [A/AA/AAA]
+**Severity:** [Critical/High/Medium/Low]
+**Category:** [Category Name]
+
+### Issue Description
+[Clear explanation of the accessibility violation and why it matters]
+
+### User Impact
+- **Affected Users:** [Blind/Low Vision/Deaf/Motor Disability/Cognitive/etc.]
+- **Severity:** [What functionality is blocked or degraded]
+
+### Violations Found
+
+#### File: `[path/to/file.jsx]`
+**Lines:** [line numbers]
+```[language]
+[problematic code snippet]
+```
+
+**Issue:** [Specific problem with this code]
+
+---
+### Recommended Fix
+```[language]
+[corrected code snippet]
+```
+
+**Changes Made:**
+1. [Specific change 1]
+2. [Specific change 2]
+
+---
+### Additional Instances
+[If multiple files affected, list them here]
+
+- `file1.jsx` (line 45)
+- `file2.tsx` (line 120)
+- `file3.html` (line 89)
+
+### Testing Instructions
+1. [Step-by-step testing with screen reader]
+2. [Keyboard navigation testing]
+3. [Color contrast verification]
+4. [Tool to use: WAVE, axe DevTools, Lighthouse]
+
+### Resources
+- [WCAG Success Criterion link]
+- [MDN documentation link]
+- [WebAIM article link]
+
+### Acceptance Criteria
+- [ ] Code updated per recommendations
+- [ ] Tested with screen reader ([specify: NVDA/JAWS/VoiceOver])
+- [ ] Keyboard navigation works as expected
+- [ ] Automated tests pass (Lighthouse/axe)
+- [ ] Manual testing completed
+
+---
+```
+
+## Commit Message Format
+
+When fixing accessibility issues:
+
+```
+fix(a11y): [Brief description of fix]
+
+- Add alt text to product images (Issue #123)
+- Implement keyboard navigation for modal
+- Meets WCAG [Level] [Criterion]
+
+WCAG: [Success Criterion Number]
+Severity: [Critical/High/Medium/Low]
+
+```
+
+## HTML Comment Marker Format
+
+```html
+<!-- A11Y-ISSUE: [issue number] - [brief description] -->
+[code that needs fixing]
+<!-- END A11Y-ISSUE -->
+```
+
+For fixed issues:
+
+```html
+<!-- A11Y-FIXED: [issue number] - [brief description] -->
+[corrected code]
+<!-- END A11Y-FIXED -->
+```
+
+## Tools Integration
+
+### Required Tools
+
+- **GitHub API:** For issue creation and label management
+- **File System Access:** To scan and mark files
+
+### Recommended Testing Tools (reference in issues)
+
+- Chrome DevTools Lighthouse
+- axe DevTools browser extension
+- WAVE Web Accessibility Evaluation Tool
+- WebAIM Contrast Checker
+- Screen readers: NVDA (Windows), JAWS (Windows), VoiceOver (macOS/iOS)
+
+## Automated Checks
+
+Run the following automated checks during scan:
+
+1. Missing alt attributes on images
+2. Form inputs without labels
+3. Buttons/links without accessible names
+4. Heading hierarchy violations
+5. Missing ARIA labels on custom components
+6. Color contrast issues (if tools available)
+7. Missing lang attribute
+8. HTML validation errors
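The first three checks above can be sketched as a single stdlib-only pass; checks like contrast (6) require rendered styles and validation (8) needs a dedicated validator. Names here are illustrative, not from any existing tool:

```python
from html.parser import HTMLParser

class QuickA11yScan(HTMLParser):
    """Covers checks 1-3: alt attributes, input labelling, button names."""

    def __init__(self):
        super().__init__()
        self.findings = []
        self.labelled_ids = set()   # ids referenced by <label for="...">
        self.inputs = []            # (id, has aria-label) per visible input
        self.in_button = False
        self.button_text = ""
        self.button_attrs = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.findings.append("img missing alt attribute")
        elif tag == "label" and attrs.get("for"):
            self.labelled_ids.add(attrs["for"])
        elif tag == "input" and attrs.get("type") != "hidden":
            self.inputs.append((attrs.get("id"), "aria-label" in attrs))
        elif tag == "button":
            self.in_button = True
            self.button_text = ""
            self.button_attrs = attrs

    def handle_data(self, data):
        if self.in_button:
            self.button_text += data

    def handle_endtag(self, tag):
        if tag == "button":
            self.in_button = False
            if not self.button_text.strip() and "aria-label" not in self.button_attrs:
                self.findings.append("button without accessible name")

    def report(self):
        # Resolve input labels only after the whole document has been fed,
        # so labels that appear after their input still count.
        out = list(self.findings)
        for input_id, has_aria in self.inputs:
            if not has_aria and input_id not in self.labelled_ids:
                out.append("input without label")
        return out

def quick_scan(html: str) -> list[str]:
    scanner = QuickA11yScan()
    scanner.feed(html)
    return scanner.report()
```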
+
+## Reporting
+
+After completing the scan, create a summary comment in the weekly digest (if available) or as a standalone GitHub Discussion:
+
+```markdown
+# Accessibility Scan Results - [Date]
+
+## Summary
+- **Total Issues Found:** [number]
+- **Critical:** [number]
+- **High:** [number]
+- **Medium:** [number]
+- **Low:** [number]
+
+## Issues by WCAG Level
+- **Level A:** [number] issues
+- **Level AA:** [number] issues
+- **Level AAA:** [number] issues
+
+## New Issues Created
+[Links to GitHub issues]
+
+## Previously Tracked Issues
+[Status updates on existing accessibility issues]
+
+## Recommendations
+[Priority fixes based on severity and user impact]
+
+---
+```
+
+## Best Practices
+
+1. **Group Similar Issues:** Create one issue for multiple instances of the same problem
+2. **Prioritize Critical Path:** Focus on issues affecting core user journeys first
+3. **Provide Context:** Explain why each fix improves accessibility, not just what to change
+4. **Include Testing Steps:** Make fixes verifiable with specific testing instructions
+5. **Reference Standards:** Link to WCAG success criteria and documentation
+6. **Progressive Enhancement:** Suggest fixes that work across all browsers and assistive technologies
+
+## Notes
+
+- This agent focuses on **detectable** accessibility issues; manual testing with real assistive technologies is still required
+- Some issues (like semantic appropriateness) require human judgment
+- Color contrast can only be checked if you have access to rendered styles
+- Regular scans help catch regressions as code evolves
+- Consider running after major UI changes or before releases
\ No newline at end of file
diff --git a/.continue/checks/agentsmd-updater.md b/.continue/checks/agentsmd-updater.md
new file mode 100644
index 0000000000..b22f5b4a53
--- /dev/null
+++ b/.continue/checks/agentsmd-updater.md
@@ -0,0 +1,5 @@
+---
+name: agentsmd-updater
+---
+
+You are maintaining the project's AGENTS.md file. Review the pull request and identify new build steps, scripts, directory changes, dependencies, environment variables, architectural decisions, code style rules, or workflows that an AI coding agent should know about. Compare these findings with the existing AGENTS.md and update the file so it stays accurate, complete, and practical for automated agents. Keep the structure clean and the explanations brief. If the file is missing, create one. Do not modify any other file.
\ No newline at end of file
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
new file mode 100644
index 0000000000..ebbb37ee58
--- /dev/null
+++ b/.github/workflows/codeql.yml
@@ -0,0 +1,101 @@
+# For most projects, this workflow file will not need changing; you simply need
+# to commit it to your repository.
+#
+# You may wish to alter this file to override the set of languages analyzed,
+# or to provide custom queries or build logic.
+#
+# ******** NOTE ********
+# We have attempted to detect the languages in your repository. Please check
+# the `language` matrix defined below to confirm you have the correct set of
+# supported CodeQL languages.
+#
+name: "CodeQL Advanced"
+
+on:
+ push:
+ branches: [ "main" ]
+ pull_request:
+ branches: [ "main" ]
+ schedule:
+ - cron: '40 8 * * 6'
+
+jobs:
+ analyze:
+ name: Analyze (${{ matrix.language }})
+ # Runner size impacts CodeQL analysis time. To learn more, please see:
+ # - https://gh.io/recommended-hardware-resources-for-running-codeql
+ # - https://gh.io/supported-runners-and-hardware-resources
+ # - https://gh.io/using-larger-runners (GitHub.com only)
+ # Consider using larger runners or machines with greater resources for possible analysis time improvements.
+ runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+ permissions:
+ # required for all workflows
+ security-events: write
+
+ # required to fetch internal or private CodeQL packs
+ packages: read
+
+ # only required for workflows in private repositories
+ actions: read
+ contents: read
+
+ strategy:
+ fail-fast: false
+ matrix:
+ include:
+ - language: actions
+ build-mode: none
+ - language: python
+ build-mode: none
+ # CodeQL supports the following values for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
+ # Use `c-cpp` to analyze code written in C, C++ or both
+ # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
+ # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
+ # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
+ # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
+ # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
+ # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+
+ # Add any setup steps before running the `github/codeql-action/init` action.
+ # This includes steps like installing compilers or runtimes (`actions/setup-node`
+ # or others). This is typically only required for manual builds.
+ # - name: Setup runtime (example)
+ # uses: actions/setup-example@v1
+
+ # Initializes the CodeQL tools for scanning.
+ - name: Initialize CodeQL
+ uses: github/codeql-action/init@v4
+ with:
+ languages: ${{ matrix.language }}
+ build-mode: ${{ matrix.build-mode }}
+ # If you wish to specify custom queries, you can do so here or in a config file.
+ # By default, queries listed here will override any specified in a config file.
+ # Prefix the list here with "+" to use these queries and those in the config file.
+
+ # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
+ # queries: security-extended,security-and-quality
+
+ # If the analyze step fails for one of the languages you are analyzing with
+ # "We were unable to automatically build your code", modify the matrix above
+ # to set the build mode to "manual" for that language. Then modify this step
+ # to build your code.
+ # ℹ️ Command-line programs to run using the OS shell.
+ # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+ - name: Run manual build steps
+ if: matrix.build-mode == 'manual'
+ shell: bash
+ run: |
+ echo 'If you are using a "manual" build mode for one or more of the' \
+ 'languages you are analyzing, replace this with the commands to build' \
+ 'your code, for example:'
+ echo ' make bootstrap'
+ echo ' make release'
+ exit 1
+
+ - name: Perform CodeQL Analysis
+ uses: github/codeql-action/analyze@v4
+ with:
+ category: "/language:${{matrix.language}}"
diff --git a/SECURITY.md b/SECURITY.md
new file mode 100644
index 0000000000..034e848032
--- /dev/null
+++ b/SECURITY.md
@@ -0,0 +1,21 @@
+# Security Policy
+
+## Supported Versions
+
+Use this section to tell people about which versions of your project are
+currently being supported with security updates.
+
+| Version | Supported |
+| ------- | ------------------ |
+| 5.1.x | :white_check_mark: |
+| 5.0.x | :x: |
+| 4.0.x | :white_check_mark: |
+| < 4.0 | :x: |
+
+## Reporting a Vulnerability
+
+Use this section to tell people how to report a vulnerability.
+
+Tell them where to go, how often they can expect to get an update on a
+reported vulnerability, what to expect if the vulnerability is accepted or
+declined, etc.
diff --git a/examples/high_level_api/legion_slim5_rtx4060.py b/examples/high_level_api/legion_slim5_rtx4060.py
new file mode 100644
index 0000000000..90a00454cf
--- /dev/null
+++ b/examples/high_level_api/legion_slim5_rtx4060.py
@@ -0,0 +1,223 @@
+"""
+Optimized llama-cpp-python configuration for:
+ Lenovo Legion Slim 5 (16" RH8)
+ - CPU: Intel Core i7-13700H (6P + 8E cores)
+ - GPU: NVIDIA GeForce RTX 4060 Laptop (8 GB VRAM, GDDR6)
+ - RAM: 16 GB DDR5-5200
+ - SSD: 1 TB NVMe
+
+Install with CUDA support first:
+
+ Bash / Linux / macOS:
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
+
+ PowerShell (Windows):
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+ python -m pip install llama-cpp-python --force-reinstall --no-cache-dir
+
+ Tip (Windows): install into a virtual environment to avoid dependency conflicts
+ with other tools in your global environment:
+ python -m venv .venv-llama
+ ./.venv-llama/Scripts/Activate.ps1
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+ python -m pip install llama-cpp-python --force-reinstall --no-cache-dir
+"""
+
+import argparse
+import json
+import os
+import sys
+
+from llama_cpp import Llama
+
+# ---------------------------------------------------------------------------
+# Hardware constants for this machine
+# ---------------------------------------------------------------------------
+VRAM_GB = 8 # RTX 4060 Laptop VRAM
+N_PHYSICAL_CORES = 6 # P-cores only (best single-thread perf on i7-13700H)
+
+# ---------------------------------------------------------------------------
+# Recommended quantisation levels (pick one based on your model size)
+# ---------------------------------------------------------------------------
+# Model 7B / 8B:
+#   Q5_K_M → ~5.5 GB VRAM  ✅ recommended
+#   Q6_K   → ~6.5 GB VRAM  ✅ excellent quality
+#   Q8_0   → ~8.5 GB VRAM  ⚠️ tight fit, may spill to CPU RAM
+#
+# Model 13B:
+#   Q4_K_M → ~7.5 GB VRAM  ✅ fits
+#   Q5_K_M → ~9.0 GB VRAM  ❌ exceeds VRAM
+
+
+def build_llm(
+ model_path: str,
+ n_ctx: int = 4096,
+ n_gpu_layers: int = -1, # -1 = offload all layers to GPU
+ n_batch: int = 512,
+ verbose: bool = False,
+) -> Llama:
+ """
+ Create a Llama instance tuned for the Legion Slim 5 / RTX 4060 laptop.
+
+ Args:
+ model_path: Path to the .gguf model file.
+ n_ctx: Context window size (tokens). 4096 is safe for 8 GB VRAM.
+ n_gpu_layers: Number of transformer layers to offload to the GPU.
+ Use -1 to offload everything (default). Reduce if you
+ see CUDA out-of-memory errors.
+ n_batch: Batch size for prompt evaluation.
+ verbose: Print llama.cpp loading messages.
+
+ Returns:
+ A ready-to-use Llama instance.
+ """
+ return Llama(
+ model_path=model_path,
+ # --- GPU offload ---
+ n_gpu_layers=n_gpu_layers, # RTX 4060 has 8 GB â offload as much as fits
+ offload_kqv=True, # keep KV-cache on GPU for faster inference
+ # --- CPU threads ---
+ n_threads=N_PHYSICAL_CORES, # use P-cores only for best throughput
+ n_threads_batch=N_PHYSICAL_CORES,
+ # --- Context / batching ---
+ n_ctx=n_ctx,
+ n_batch=n_batch,
+ # --- Memory ---
+ use_mmap=True, # fast model loading from NVMe SSD
+ use_mlock=False, # don't pin 16 GB RAM â OS needs headroom
+ # --- Misc ---
+ verbose=verbose,
+ )
+
+
+def main() -> None:
+ parser = argparse.ArgumentParser(
+ description="Run inference optimised for the Lenovo Legion Slim 5 / RTX 4060"
+ )
+ parser.add_argument(
+ "-m", "--model",
+ required=True,
+ help="Path to the .gguf model file (e.g. mistral-7b-Q5_K_M.gguf)",
+ )
+ parser.add_argument(
+ "-p", "--prompt",
+ default="What are the names of the planets in the solar system?",
+ help="Prompt text",
+ )
+ parser.add_argument(
+ "--system-prompt",
+ default=None,
+ help="Optional system prompt prepended before the user prompt",
+ )
+ parser.add_argument(
+ "--max-tokens", type=int, default=256,
+ help="Maximum number of tokens to generate",
+ )
+ parser.add_argument(
+ "--n-ctx", type=int, default=4096,
+ help="Context window size",
+ )
+ parser.add_argument(
+ "--n-gpu-layers", type=int, default=-1,
+ help="GPU layers to offload (-1 = all)",
+ )
+ parser.add_argument(
+ "--seed", type=int, default=-1,
+ help="RNG seed for reproducible output (-1 = random)",
+ )
+ parser.add_argument(
+ "--temperature", type=float, default=0.8,
+ help="Sampling temperature (0.0 = greedy, higher = more creative)",
+ )
+ parser.add_argument(
+ "--top-p", type=float, default=0.95,
+ help="Nucleus sampling probability threshold",
+ )
+ parser.add_argument(
+ "--repeat-penalty", type=float, default=1.1,
+ help="Penalty applied to repeated tokens (1.0 = disabled)",
+ )
+ parser.add_argument(
+ "--json-output", action="store_true",
+ help="Print only raw JSON output (no banner); useful for piping",
+ )
+ parser.add_argument(
+ "--verbose", action="store_true",
+ help="Print llama.cpp loading messages",
+ )
+ args = parser.parse_args()
+
+ # --- Validate model path -------------------------------------------------
+ model_path = os.path.abspath(args.model)
+ if not os.path.isfile(model_path):
+ print(
+ f"ERROR: model file not found: {model_path}\n"
+ " Make sure the path is correct and the file exists.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ if not args.json_output:
+ print(f"Loading model: {model_path}")
+ print(f"GPU layers : {'all' if args.n_gpu_layers == -1 else args.n_gpu_layers}")
+ print(f"Context size : {args.n_ctx} tokens\n")
+
+ # --- Load model ----------------------------------------------------------
+ try:
+ llm = build_llm(
+ model_path=model_path,
+ n_ctx=args.n_ctx,
+ n_gpu_layers=args.n_gpu_layers,
+ verbose=args.verbose,
+ )
+ except Exception as exc:
+ err = str(exc)
+ print(f"ERROR: failed to load model: {err}", file=sys.stderr)
+ if args.n_gpu_layers == -1 and (
+ "out of memory" in err.lower() or "cuda" in err.lower()
+ ):
+ print(
+ " Hint: GPU ran out of VRAM while loading all layers.\n"
+ " Try reducing --n-gpu-layers (e.g. --n-gpu-layers 28) to keep\n"
+ " some layers on CPU RAM instead.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ # --- Build prompt --------------------------------------------------------
+ if args.system_prompt:
+ full_prompt = f"{args.system_prompt}\n\n{args.prompt}"
+ else:
+ full_prompt = args.prompt
+
+ # --- Run inference -------------------------------------------------------
+ try:
+ output = llm(
+ full_prompt,
+ max_tokens=args.max_tokens,
+ stop=["Q:", "\n\n"],
+ echo=True,
+ seed=args.seed,
+ temperature=args.temperature,
+ top_p=args.top_p,
+ repeat_penalty=args.repeat_penalty,
+ )
+ except Exception as exc:
+ err = str(exc)
+ print(f"ERROR: inference failed: {err}", file=sys.stderr)
+ if args.n_gpu_layers == -1 and (
+ "out of memory" in err.lower() or "cuda" in err.lower()
+ ):
+ print(
+ " Hint: GPU ran out of VRAM during inference.\n"
+ " Try reducing --n-gpu-layers (e.g. --n-gpu-layers 28) to keep\n"
+ " some layers on CPU RAM instead.",
+ file=sys.stderr,
+ )
+ sys.exit(1)
+
+ print(json.dumps(output, indent=2, ensure_ascii=False))
+
+
+if __name__ == "__main__":
+ main()