When you send a prompt in campusGenAI, it goes through the steps below. We are sharing this to help you understand where moderation or filtering might take place.
1. Prompt submission
- Your message is captured by the campusGenAI platform.
- Built-in moderation is currently disabled, so your text is not filtered at this stage. Once the auto-moderation feature is re-enabled, we will update the filter information accordingly.
2. Model routing and safety layers
Once sent, your prompt goes through both the cloud platform's baseline safety checks and the safety policies of the specific model provider. Depending on the model or agent you choose, your prompt is forwarded to one of the following providers.
Note: Although prompts are sent to one of these model providers, your data is not used to train any of the models accessible from this platform, as stated in the following policies: Azure OpenAI, Azure Foundry, and AWS Bedrock.
Tip: Read the description in the platform's model dropdown menu to see which cloud platform (Azure or AWS) a model is being accessed through.
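The routing step above can be sketched as a simple lookup from the chosen model to its hosting platform. The model names and mapping below are illustrative assumptions, not campusGenAI's actual configuration:

```python
# Illustrative sketch of the routing step: map the user's chosen model to the
# cloud platform that hosts it. The model names and mapping are hypothetical
# examples, not the platform's real configuration.

PROVIDER_BY_MODEL = {
    "gpt-4o": "Azure AI Foundry",
    "deepseek-r1": "Azure AI Foundry",
    "claude-3-5-sonnet": "AWS Bedrock",
    "mistral-large": "AWS Bedrock",
}

def route_prompt(model_name: str) -> str:
    """Return the cloud platform a prompt would be forwarded to."""
    try:
        return PROVIDER_BY_MODEL[model_name]
    except KeyError:
        raise ValueError(f"Unknown model: {model_name}")

print(route_prompt("claude-3-5-sonnet"))  # AWS Bedrock
```

In practice the dropdown description mentioned in the tip above is the authoritative source for which platform a given model runs on.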
Azure AI Foundry
- campusGenAI sends your prompt to a model hosted through Azure AI Foundry.
- It is first processed by Azure's built-in content filtering systems. Azure evaluates both the input prompt and the model's output against defined safety categories and severity thresholds. (Microsoft, Azure OpenAI Content Filtering).
- Currently, guardrails are configured using the default safety thresholds. Azure provides the ability to adjust these thresholds to be more or less restrictive. (Microsoft, Default Guardrail policies for Azure OpenAI).
- Content that exceeds those thresholds may be blocked before it reaches the model, in which case the API returns a 400 error instead of a response.
- After platform-level filtering, the request is routed to the selected model (e.g., GPT models, Grok, DeepSeek models), which also operates under its own published usage and safety policies and may refuse to generate responses that violate them. (For example, OpenAI Usage Policies).
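When Azure's filter blocks a prompt, the 400 error body identifies which safety category was triggered. Below is a hedged sketch of inspecting such a payload; the field names follow Azure's documented content-filter error format, but treat the exact shape as an assumption and verify it against the current API reference:

```python
# Hedged sketch: inspect an Azure OpenAI 400 error body to see why a prompt
# was filtered. The payload shape is an assumption based on Azure's
# documented content-filter error format.

def filtered_categories(error_body: dict) -> list[str]:
    """Return the safety categories that triggered the block, if any."""
    if error_body.get("error", {}).get("code") != "content_filter":
        return []
    result = error_body["error"].get("innererror", {}).get("content_filter_result", {})
    return [cat for cat, info in result.items() if info.get("filtered")]

# Example payload resembling a content-filter rejection:
sample = {
    "error": {
        "code": "content_filter",
        "message": "The response was filtered",
        "innererror": {
            "content_filter_result": {
                "hate": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "medium"},
            }
        },
    }
}

print(filtered_categories(sample))  # ['violence']
```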
AWS Bedrock
- campusGenAI sends your prompt to a model hosted through Amazon Bedrock. This happens within the AWS service boundary: requests are authenticated, encrypted in transit, and processed within the AWS system before being routed to the selected foundation model.
- Amazon Bedrock Guardrails provide additional controls that can evaluate both input prompts and model outputs against defined policies, such as content categories and denied topics. (AWS, Amazon Bedrock Guardrails). Currently, the Bedrock Guardrails are not configured.
- The selected model provider (e.g., Anthropic Claude, Mistral, Llama) operates under its own published safety and acceptable use policies. (For example, Anthropic Usage Policy).
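Although Bedrock Guardrails are not configured today, a sketch of what enabling them could look like may be useful. The config below is shaped like the input to the boto3 Bedrock client's `create_guardrail` call; the name, topics, and strength values are purely illustrative assumptions, not a planned setup:

```python
# Hypothetical Bedrock Guardrails configuration, shaped like the input to
# boto3's bedrock client create_guardrail call. All names and values below
# are illustrative assumptions, not a real or planned configuration.

guardrail_config = {
    "name": "campusgenai-example-guardrail",  # hypothetical name
    "blockedInputMessaging": "This prompt was blocked by platform policy.",
    "blockedOutputsMessaging": "This response was blocked by platform policy.",
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        ]
    },
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "ExampleDeniedTopic",  # hypothetical denied topic
                "definition": "An illustrative topic the guardrail would refuse.",
                "type": "DENY",
            }
        ]
    },
}
```

Such a config would be passed to a Bedrock control-plane client (e.g., `boto3.client("bedrock").create_guardrail(**guardrail_config)`); consult the current AWS API reference before relying on any field name here.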
Tools
Tools or plugins may include their own instructions. These instructions are added as a system prompt and sent to the AI model along with the user’s request. In our platform’s model selection menu, models marked with web search and image generation icons (🔍🖼️) include an additional system prompt that instructs the AI how to use these tools. Current system prompt:
You are an AI assistant that supports three main tasks: chatting, web search, and image generation.
If the user is chatting or asking questions, respond naturally using your language model capabilities.
If the user explicitly asks to search the internet (e.g., “search online”, “look it up on the web”), use the google tool to find accurate and up-to-date information.
If the user requests an image or illustration, use the image_gen_oai tool. Ask for clarification if the description is vague.
Always follow the user’s intent.
Note that today’s date and time: {{current_datetime}}
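The `{{current_datetime}}` placeholder in the system prompt above is filled in at request time. A minimal sketch of that substitution, assuming simple string replacement (the actual templating mechanism is not documented here):

```python
from datetime import datetime

# Minimal sketch of filling the {{current_datetime}} placeholder before the
# system prompt is sent to the model. The placeholder name comes from the
# prompt above; the substitution mechanism itself is an assumption.

SYSTEM_PROMPT_TEMPLATE = "Note that today's date and time: {{current_datetime}}"

def render_system_prompt(template: str, now: datetime) -> str:
    return template.replace("{{current_datetime}}", now.strftime("%Y-%m-%d %H:%M"))

rendered = render_system_prompt(SYSTEM_PROMPT_TEMPLATE, datetime(2025, 1, 15, 9, 30))
print(rendered)  # Note that today's date and time: 2025-01-15 09:30
```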
Web Search
- If using a web search-enabled model or agent (🔍), queries are sent to the Google Search API when prompted to search the web.
- Results are subject to Google's own SafeSearch settings and search policies.
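A web search call like the one described above can be sketched as a request to Google's Custom Search JSON API with SafeSearch enabled via the `safe` parameter. The key and engine ID below are placeholders, and whether campusGenAI uses this exact endpoint is an assumption:

```python
from urllib.parse import urlencode

# Sketch of a Google Custom Search JSON API request with SafeSearch on.
# The endpoint and `safe` parameter come from Google's public API docs;
# the key and engine ID are placeholders, not real credentials.

def build_search_url(query: str, api_key: str, engine_id: str) -> str:
    params = urlencode({"key": api_key, "cx": engine_id, "q": query, "safe": "active"})
    return f"https://www.googleapis.com/customsearch/v1?{params}"

url = build_search_url("campus library hours", "API_KEY", "ENGINE_ID")
print(url)
```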
Image Generation
- If using an image generation-enabled model or agent (🖼️), image requests are sent to OpenAI’s GPT-Image-1 model, provided through Azure AI Foundry, when the model is prompted to generate an image.
- Image generation follows Azure’s default safety thresholds.
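An image request routed this way might carry a body like the sketch below, with field names mirroring the Azure OpenAI images API. The deployment name, prompt, and size are illustrative assumptions:

```python
# Hedged sketch of the request body an image generation call might carry when
# routed to GPT-Image-1 on Azure AI Foundry. Field names mirror the Azure
# OpenAI images API; all values are illustrative assumptions.

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    return {
        "model": "gpt-image-1",  # deployment name is platform-configured; this is an assumption
        "prompt": prompt,
        "size": size,
        "n": 1,
    }

payload = build_image_request("a watercolor of the campus quad")
print(payload["model"])  # gpt-image-1
```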
3. Model response
- The model returns a response.
- Azure and AWS Bedrock apply the same safety layers to outgoing responses.
- campusGenAI displays the result to you (and would apply moderation here as well, if it were enabled).
Frequently asked questions
Can my prompt still be blocked while moderation is off?
Yes. Even though campusGenAI's auto-moderation feature is off, Azure and Bedrock may still block prompts that violate their safety policies.
Does campusGenAI apply its own content filtering?
Not currently. We rely on the platform providers' built-in safety measures as part of our compliance obligations.