Our Principles
How AI Is Used in Ishara
- Signal classification. When keyword-based rules can't confidently categorise a signal, an LLM assigns it to the appropriate business system and sector.
- Interpretation and analysis. Once signals are collected and classified, an LLM (Anthropic's Claude) generates the daily briefing — including system assessments, pathway analysis, investigation prompts, and suggested actions.
- Contextual enrichment. Perplexity's Sonar model provides additional context for each sub-sector query, which is then factored into the main analysis.
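The classification step above can be sketched as a simple two-stage fallback: deterministic keyword rules run first, and only unmatched signals are passed to a model. This is an illustrative sketch, not Ishara's actual implementation — the keyword table and the `llm_classify` stub are hypothetical stand-ins.

```python
# Hypothetical sketch of rules-first classification with an LLM fallback.
# KEYWORD_RULES and llm_classify are illustrative, not Ishara's real config.

KEYWORD_RULES = {
    "shipping": ("logistics", "supply chain"),
    "tariff": ("trade", "policy"),
    "harvest": ("agriculture", "food production"),
}

def llm_classify(text: str) -> tuple[str, str]:
    """Placeholder for the LLM fallback; a real system would call a model here."""
    return ("unclassified", "general")

def classify_signal(text: str) -> tuple[str, str]:
    """Try keyword rules first; fall back to the LLM only when no rule matches."""
    lowered = text.lower()
    for keyword, label in KEYWORD_RULES.items():
        if keyword in lowered:
            return label
    return llm_classify(text)
```

Keeping the cheap, auditable rules in front means the LLM is only consulted for genuinely ambiguous signals.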
What AI Does Not Do
- AI does not make predictions or forecasts.
- AI does not access your email, files, or any data beyond your Ishara profile.
- AI does not make decisions on your behalf.
- AI does not generate financial advice, legal opinions, or professional recommendations.
Known Limitations
- Errors happen. LLMs can produce inaccurate or outdated information. Always cross-reference with primary sources.
- Data gaps. Coverage varies by country and sector. When data is limited, Ishara flags this as a "blind spot" rather than guessing.
- No real-time guarantees. Briefings are generated from data collected during the pipeline run, not live feeds.
- English-only analysis. Source signals in Arabic or other languages may lose nuance in processing.
Language Standards
Ishara's AI output follows strict language rules designed to prevent alarm or overconfidence:
- No crisis language (words like "collapse", "urgent", "must" are prohibited).
- No percentage predictions or certainty claims.
- No imperatives — we suggest what to investigate, not what to do.
- Every observation must cite a named source.
Feedback Loop
Your reactions and calibration inputs directly improve future briefings. When you mark a signal as irrelevant or confirm an insight was useful, that feedback shapes how Ishara prioritises information for your business type.
Questions
Want to know more about how AI is used in your briefings? Contact us at hello@isharatoday.com.