AI Disclosure & Methodology
Last updated: May 3, 2026
Why This Page Exists
Intelloro is an AI tools directory built using AI. To meet our obligations under the EU AI Act (Regulation (EU) 2024/1689) and to give every visitor a clear picture of how the platform works, this page documents:
- What AI systems we operate and what they do
- How we have classified each system under the AI Act risk tiers (Articles 6–9)
- What transparency obligations apply (Article 50) and how we meet them
- How we promote AI Literacy among users and staff (Article 4)
- How we rank tools (the “methodology”) so you can interpret our scores
- How we handle errors and incidents
1. AI Systems We Operate
| System | Function | Underlying Model(s) |
|---|---|---|
| Smart Generator (SG) | Extracts tool descriptions, features, pricing, and 70+ structured fields from public vendor websites | Google Gemini (text extraction) |
| Approach 3 Verifier (A3) | Verifies SG outputs against 15+ external sources (G2, Capterra, GitHub, etc.) | Anthropic Claude |
| Score Generator | Generates Task Scores (10 categories), Dimension Scores (6 dimensions), and Trust Score for each tool | Google Gemini + rule-based formulas |
| Decision Engine | Personalizes tool recommendations based on user profile (team size, skill level, use case) | Rule-based + Pinecone vector similarity |
| Smart Router | Routes search queries to the most relevant page (category, goal, comparison, listing) | Pattern matching + Gemini fallback |
| Semantic Search | Returns tools matching free-text queries via embedding similarity | OpenAI / Gemini embeddings + Pinecone |
None of these systems process your personal data. Inputs are public vendor information; outputs are tool metadata and recommendations.
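The Smart Router's pattern-first, model-fallback design can be sketched roughly as follows. This is an illustrative sketch only: the route patterns, page types, and the `llm_classify` stub are hypothetical, not the production rules.

```python
import re

# Hypothetical route patterns; a production router would load these from config.
ROUTES = [
    (re.compile(r"\bvs\.?\b|\bcompare\b", re.I), "comparison"),
    (re.compile(r"\bbest\b|\btop\b", re.I), "listing"),
    (re.compile(r"\bhow to\b", re.I), "goal"),
]

def llm_classify(query: str) -> str:
    """Placeholder for the Gemini fallback used when no pattern fires."""
    return "category"

def route(query: str) -> str:
    """Try cheap pattern matching first; fall back to the model only
    when no pattern matches the query."""
    for pattern, page in ROUTES:
        if pattern.search(query):
            return page
    return llm_classify(query)

print(route("Notion vs Coda"))  # comparison
```

The design choice here is cost and latency: most navigational queries resolve with a regex, so the model is only invoked for the ambiguous tail.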
2. EU AI Act Risk Classification (Articles 6–9)
We have reviewed each AI system above against the AI Act's risk tiers and Annex III high-risk categories. Our classification:
| System | Risk Tier | Reason |
|---|---|---|
| Smart Generator | Limited | Generates synthetic text (Art. 50(2)) — transparency required, no high-risk use |
| Approach 3 Verifier | Minimal | Internal verification; no user-facing decisions |
| Score Generator | Limited | Outputs published recommendations (transparency required); not employment, credit, education, or other Annex III high-risk uses |
| Decision Engine | Limited | Recommends consumer software; not high-risk per Annex III |
| Smart Router | Minimal | Internal navigation logic |
| Semantic Search | Minimal | Search ranking, no automated decisions affecting individuals |
None of our systems are classified as high-risk under Annex III (biometric ID, critical infrastructure, education, employment, essential services, law enforcement, migration, or justice administration). If our use case ever expands into a high-risk domain, we will conduct a full conformity assessment per Articles 16 and 43 before deployment.
Prohibited practices (Art. 5): Intelloro does not engage in any AI Act-prohibited practices — no subliminal manipulation, no exploitation of vulnerabilities, no social scoring, no real-time remote biometric ID, no untargeted face-DB scraping, no emotion inference in workplace/education, no biometric categorisation by sensitive attributes.
3. Article 50 — Transparency Obligations
EU AI Act Article 50 requires providers and deployers of AI systems to inform users when they are interacting with AI or consuming AI-generated content. Our compliance:
- AI-Generated Descriptions: tool descriptions extracted by AI carry a visible “AI-Generated” or “AI-Extracted” label on detail pages, at the point of consumption.
- AI-Generated Scores: Task Scores, Dimension Scores, and Trust Scores are clearly labelled as algorithmically generated, with links to this methodology page from the score badges.
- AI Recommendations: the Decision Engine's recommendations are labelled as “Personalized for you” with the option to view the underlying rule-based logic.
- No Deepfakes / Synthetic Media: Intelloro does not host or generate deepfake images, audio, or video. The site does not produce synthetic media that could be mistaken for real persons.
4. Article 4 — AI Literacy
EU AI Act Article 4 requires providers and deployers to ensure a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems. Our measures:
- Staff training: we are documenting an internal AI Literacy guide covering how each AI system works, its known limitations, how to recognize hallucinations, and the escalation path for AI-related incidents. All team members operating AI systems will be required to acknowledge this guide before working on AI features.
- User education: this page itself is a public-facing AI Literacy resource. Score badges and tool detail pages link here so any user can understand how AI shapes what they see.
- Limitation disclosure: our AI methodology is published openly on this page so users understand AI-extracted data may contain errors.
- Right to verify: we publish source citations and external review-platform links so users can independently verify any AI-generated claim.
5. How We Rank Tools (Methodology)
Trust Score (0–100)
Calculated from 11 verifiable signals: company verification, named customers, total funding, status page presence, uptime SLA, GDPR compliance, SOC 2 certification, HIPAA compliance, data residency disclosure, user count range, and employee count. Each signal is weighted; absent signals reduce the score. The score is recalculated whenever a tool is updated.
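The weighted-signal mechanism above can be sketched as follows. The weights here are hypothetical placeholders chosen to sum to 100; Intelloro's actual weights are not published.

```python
# Hypothetical weights (sum to 100) for the 11 trust signals;
# not Intelloro's actual values.
TRUST_SIGNALS = {
    "company_verified": 15,
    "named_customers": 10,
    "total_funding": 10,
    "status_page": 5,
    "uptime_sla": 10,
    "gdpr": 10,
    "soc2": 10,
    "hipaa": 10,
    "data_residency": 5,
    "user_count_range": 10,
    "employee_count": 5,
}

def trust_score(present_signals: set[str]) -> int:
    """Sum the weights of the signals a tool actually discloses;
    absent signals simply contribute nothing, lowering the total."""
    return sum(w for name, w in TRUST_SIGNALS.items() if name in present_signals)

print(trust_score({"company_verified", "gdpr", "soc2"}))  # 35
```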
Task Scores (1–10 across 10 categories)
Categories: Chatbots, Coding, Image, Writing, Video, Audio, Automation, Data, Design, Customer Support. Each tool is scored using a tier-based system: (Tier 1) G2 per-feature ratings if available; (Tier 2) countable evidence on vendor website (e.g., “250,000+ templates” → high template-variety score); (Tier 3) AI-generated estimate based on category rubric anchors.
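The three-tier fallback can be expressed as a simple precedence rule, sketched below (the function name and signature are illustrative, not the production code):

```python
from typing import Optional

def task_score(g2_feature_rating: Optional[float],
               countable_evidence_score: Optional[float],
               rubric_estimate: float) -> float:
    """Tiered fallback on a 1-10 scale: a G2 per-feature rating wins,
    then countable vendor evidence, then the AI rubric estimate."""
    if g2_feature_rating is not None:         # Tier 1
        return g2_feature_rating
    if countable_evidence_score is not None:  # Tier 2
        return countable_evidence_score
    return rubric_estimate                    # Tier 3
```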
Dimension Scores (1–10 across 6 dimensions)
Dimensions: Ease of Use, Output Quality, Value for Money, Customizability, Support Ecosystem, Integration Power. We override dimension scores with G2 / Capterra user ratings whenever available (e.g., G2 Ease of Use 8.9 / 10 → 8.9). When user-rating data is missing, we estimate from rubric anchors.
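The per-dimension override rule (user-platform rating beats rubric estimate) can be sketched as a dictionary merge; the helper name and data shapes here are illustrative:

```python
def blend_dimensions(estimates: dict[str, float],
                     user_ratings: dict[str, float]) -> dict[str, float]:
    """Per dimension: use the G2/Capterra user rating when one exists,
    otherwise keep the rubric-anchored estimate."""
    return {dim: user_ratings.get(dim, est) for dim, est in estimates.items()}

blend_dimensions({"Ease of Use": 7.0, "Output Quality": 8.0},
                 {"Ease of Use": 8.9})
# → {"Ease of Use": 8.9, "Output Quality": 8.0}
```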
Search & Recommendation Ranking
Free-text search uses embedding similarity (cosine similarity) on tool name + tagline + description. Personalized recommendations (Decision Engine) combine: (1) explicit user preferences from the profile, (2) Trust Score, (3) Task Score for the matching category, (4) the user's industry and team size. Sponsored listings are clearly labelled “Sponsored” and never displace organic, score-based ranking.
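The similarity ranking step can be sketched with plain cosine similarity over stored embeddings. This is a minimal, from-scratch sketch: in production the vectors live in Pinecone and the nearest-neighbour search happens there, not in Python.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_tools(query_vec: list[float],
               tools: list[tuple[str, list[float]]]) -> list[str]:
    """Return tool names ordered by similarity to the query, best first."""
    ordered = sorted(tools, key=lambda t: cosine(query_vec, t[1]), reverse=True)
    return [name for name, _ in ordered]
```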
6. Known Limitations & Errors
All AI systems make mistakes. We disclose the known limitations of ours:
- Tool descriptions may contain factual errors or outdated information — vendors may change features or pricing without our knowledge.
- Compliance flags (GDPR, SOC 2, HIPAA) reflect vendor self-reporting — we do not independently audit these claims.
- Scores can be wrong if the underlying source data is wrong (e.g., an inflated G2 review count).
- The Decision Engine does not understand domain-specific nuance; recommendations should be a starting point, not a final purchase decision.
- External review aggregations (G2, Capterra, Trustpilot) reflect a snapshot in time and may not be current.
If you find an error, please report it via our contact form or use the Report Content page if it is harmful or misleading. Vendors can claim their listing to correct inaccuracies directly.
7. Incident Handling & Reporting (Art. 73)
Should any of our AI systems cause a serious incident (e.g., systematic harm, large-scale misinformation, breach of fundamental rights), we will:
- Take the affected system offline within 24 hours
- Notify affected users without undue delay
- Report to the relevant national competent authority within 15 days (2 days for widespread incidents), as AI Act Art. 73 requires once enforcement begins
- Document the root cause, remediation, and lessons learned in our annual Transparency Report
8. Governance & Accountability
AI Lead / Operations Lead: Sultan Mahamud, Head of Operations, is accountable for AI governance at Intelloro. AI system changes are reviewed by the operations lead before deployment. Risk-tier classifications will be re-reviewed periodically (at minimum annually) as the platform scales.
Vendor models: we use foundation models from third parties (Anthropic, OpenAI, Google). We operate under each provider's published API terms and data-processing terms. We do not transmit personal data of Intelloro users to these providers — their inputs are publicly available vendor information about AI tools (not user PII).
Updates: any material change to AI methodology will be reflected on this page and announced in our changelog.
Contact
AI / Methodology Inquiries: info@intelloro.com — subject “AI Disclosure”
AI Lead: Sultan Mahamud, Head of Operations