Learn, Practice, and Improve with SAP C_AIG_2412 Practice Test Questions
- 63 Questions
- Updated on: 7-Apr-2026
- SAP Certified Associate - SAP Generative AI Developer
- Valid Worldwide
- 2630+ Prepared
- 4.9/5.0
You want to use the orchestration service through SAP's generative-ai-hub-sdk. What does the following code do?
from gen_ai_hub.orchestration.models.llm import LLM

llm = LLM(name="gpt-4o", version="latest", parameters={"max_tokens": 256, "temperature": 0.2})
A. Define the LLM
B. Run the Orchestration Request
C. Create the Orchestration Configuration
D. Define the Template and Default Input Values
A. Define the LLM
Explanation:
The provided code snippet is used to instantiate a Python object that represents a specific Large Language Model (LLM) configuration within the SAP Generative AI Hub SDK.
Why Other Options are Incorrect
B (Run the Orchestration Request):
Running the request requires a separate method call, typically orchestration_service.run(config=...). The instantiation of the LLM class does not trigger a network request.
C (Create the Orchestration Configuration):
This code defines one part of the configuration. Creating the full OrchestrationConfig requires passing this llm object (and a template) into the OrchestrationConfig class constructor.
D (Define the Template and Default Input Values):
This task is handled by the Template class (e.g., from gen_ai_hub.orchestration.models.template import Template). The LLM class is specifically for model-related settings, not the prompt structure or placeholders.
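Taken together, the four answer options correspond to distinct steps of the orchestration flow. A minimal end-to-end sketch follows; the class paths and the placeholder deployment URL are assumptions to verify against the current generative-ai-hub-sdk documentation, and AI Core credentials must already be configured:

```python
# Sketch of a complete orchestration flow with generative-ai-hub-sdk.
# Assumes AI Core credentials are configured via environment variables;
# import paths follow the gen_ai_hub package layout.
from gen_ai_hub.orchestration.models.llm import LLM
from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
from gen_ai_hub.orchestration.models.template import Template, TemplateValue
from gen_ai_hub.orchestration.models.config import OrchestrationConfig
from gen_ai_hub.orchestration.service import OrchestrationService

# 1. Define the LLM (this is what the snippet in the question does)
llm = LLM(name="gpt-4o", version="latest",
          parameters={"max_tokens": 256, "temperature": 0.2})

# 2. Define the template and default input values
template = Template(messages=[
    SystemMessage("You are a helpful translation assistant."),
    UserMessage("Translate the following to German: {{?text}}"),
])

# 3. Create the orchestration configuration from both parts
config = OrchestrationConfig(template=template, llm=llm)

# 4. Run the orchestration request against a deployed orchestration service
service = OrchestrationService(api_url="<orchestration-deployment-url>",
                               config=config)
result = service.run(template_values=[TemplateValue(name="text", value="Hello")])
print(result.orchestration_result.choices[0].message.content)
```

Only step 4 triggers a network call; steps 1-3 merely build local configuration objects, which is why the snippet in the question stops at "defining the LLM".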
Reference
SAP Learning Journey: Using SAP Cloud SDK for AI to Interact with Orchestration Services.
SAP Cloud SDK for AI (Python) Documentation: API Reference – gen_ai_hub.orchestration.models.llm.
What are some metrics to evaluate the effectiveness of a Retrieval Augmented Generation system? Note: There are 2 correct answers to this question.
A. Carbon footprint
B. Faithfulness
C. Speed
D. Relevance
B. Faithfulness
D. Relevance
Explanation:
Evaluating a Retrieval-Augmented Generation (RAG) system requires specific metrics that assess both the retrieval and generation components. Two fundamental metrics consistently identified across academic and industry literature are faithfulness and relevance.
B. Faithfulness (also called groundedness)
measures whether the generated answer stays true to the retrieved context without hallucinating or inventing information. It checks if each statement in the response is supported by the provided source documents. High faithfulness (typically >0.8-0.9 for production systems) ensures the model does not fabricate facts. This metric is critical because even with perfect retrieval, LLMs can still generate unsupported content.
D. Relevance (answer relevancy)
assesses how well the generated response actually addresses the user's query. It evaluates whether the output is on-topic and useful, not just factually correct. The metric often uses semantic similarity approaches, such as generating hypothetical questions from the answer and comparing them to the original query. Target scores of 0.7-0.8 indicate acceptable user experience.
These two metrics, together with context relevance, form what is known as the "RAG Triad" for comprehensive RAG evaluation. A systematic literature review confirms context relevance, faithfulness, and answer relevance as key evaluation dimensions.
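The intuition behind the two scores can be sketched with a deliberately simplified lexical-overlap scorer. Production evaluators such as RAGAS or DeepEval use LLM-as-judge or embedding-based methods instead; everything below is illustrative only:

```python
# Toy illustration of RAG metrics via word overlap.
# Real frameworks (e.g. RAGAS, DeepEval) use LLM-based judgments or
# embeddings; this sketch only conveys the intuition behind each score.
def _words(text: str) -> set[str]:
    return set(text.lower().split())

def faithfulness(answer: str, context: str) -> float:
    """Fraction of answer words grounded in the retrieved context."""
    answer_words = _words(answer)
    return len(answer_words & _words(context)) / len(answer_words)

def relevance(answer: str, question: str) -> float:
    """Fraction of question words the answer actually touches on."""
    question_words = _words(question)
    return len(question_words & _words(answer)) / len(question_words)

context = "the orchestration service runs in sap ai core"
question = "where does the orchestration service run"
answer = "the orchestration service runs in sap ai core"

print(faithfulness(answer, context))  # 1.0: every answer word appears in the context
print(relevance(answer, question))    # 0.5 under this crude lexical proxy
```

A carbon-footprint or latency number could be computed for the same system, but neither would tell you whether the answer above is grounded or on-topic, which is why options A and C are not effectiveness metrics.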
Why Other Options Are Incorrect
A. Carbon footprint:
Incorrect because while environmental impact may be an organizational consideration, it is not a standard metric for evaluating RAG system effectiveness. RAG evaluation focuses on accuracy, retrieval quality, and generation quality.
C. Speed:
Incorrect as latency and throughput are operational performance metrics, not measures of RAG system effectiveness. While important for production deployment, speed does not evaluate whether the system correctly retrieves information or generates accurate answers.
References
Systematic Literature Review on RAG Evaluation Dimensions (IC3K 2024)
Redis RAG Evaluation Guide
DeepEval RAG Triad Documentation
Which of the following are functionalities provided by the generative-ai-hub-sdk? Note: There are 2 correct answers to this question.
A. Interact with LLMs
B. Configure SAP BTP credentials
C. Customize SAP AI Launchpad
D. Create chat responses and embeddings
A. Interact with LLMs
D. Create chat responses and embeddings
Explanation:
A. Interact with LLMs:
The generative-ai-hub-SDK is specifically designed to enable interaction with Large Language Models (LLMs) deployed in SAP AI Core. According to SAP Learning materials, the SDK provides model access by wrapping native SDKs of model providers (OpenAI, Amazon, Google) and allows developers to "build basic prompts" and "generate responses for queries using the SDK" . The SDK's foundation-models package provides direct access to models like GPT-4, GPT-3.5 Turbo, and Gemini for chat completions .
D. Create chat responses and embeddings:
The SDK explicitly supports both chat completion and embedding creation. The foundation-models package includes two client types: AzureOpenAiChatClient for chat completions with streaming support, and AzureOpenAiEmbeddingClient for generating vector embeddings using models like "text-embedding-ada-002" . The SDK documentation confirms that embeddings can be generated "from text data using the text-embedding-ada-002 model" for use in RAG implementations .
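Both confirmed functionalities can be sketched with the SDK's OpenAI-compatible proxy. This is a hedged sketch: the import path and the `model_name` parameter should be checked against the current generative-ai-hub-sdk README, and AI Core credentials plus model deployments must already be in place:

```python
# Sketch: chat completion and embedding creation via generative-ai-hub-sdk.
# Assumes AI Core credentials (AICORE_CLIENT_ID, AICORE_CLIENT_SECRET, ...)
# are configured and the referenced models are deployed.
from gen_ai_hub.proxy.native.openai import chat, embeddings

# A. Interact with LLMs / create a chat response
response = chat.completions.create(
    model_name="gpt-4o",
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
)
print(response.choices[0].message.content)

# D. Create embeddings, e.g. for a RAG retrieval step
vectors = embeddings.create(
    model_name="text-embedding-ada-002",
    input="Every decoding is another encoding.",
)
print(len(vectors.data[0].embedding))  # dimensionality of the embedding vector
```

Note that the credentials themselves are read from the environment, not set through the SDK, which is exactly why option B is incorrect.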
Why Other Options Are Incorrect
B. Configure SAP BTP credentials:
Incorrect because credential configuration is handled outside the SDK through environment variables (AICORE_CLIENT_ID, AICORE_CLIENT_SECRET, etc.) or service bindings . The SDK reads these pre-configured credentials but does not provide functionality to configure them.
C. Customize SAP AI Launchpad:
Incorrect because SAP AI Launchpad is a separate web-based UI service for centralized AI lifecycle management . The generative-ai-hub-SDK is for programmatic interaction with AI models, not for customizing the Launchpad interface.
References
SAP Learning:
"Using SAP Cloud SDK for AI to Leverage the Power of LLMs"
What capabilities does the Exploration and Development feature of the generative AI hub provide? Note: There are 2 correct answers to this question.
A. AI playground and chat
B. Automatic model selection
C. Develop and debug ABAP code
D. Prompt editor and management
A. AI playground and chat
D. Prompt editor and management
Explanation:
The Exploration and Development area within the SAP AI Launchpad (Generative AI Hub) is designed as the primary workspace for prompt engineers and developers to experiment with Large Language Models (LLMs) before integrating them into applications.
AI Playground and Chat (A):
This feature provides a conversational interface where users can interact with various models (like GPT-4, Falcon, or Llama) in real-time. It allows for "chat-based" testing to see how different models respond to the same query, helping developers select the best model for their specific business case.
Prompt Editor and Management (D):
This includes the technical tools to craft precise prompts. The Editor allows for the configuration of parameters (temperature, stop sequences), while Management provides a registry to save, version, and organize prompts. This ensures that a prompt used in a "testing" phase can be reliably moved into a "production" environment via an API.
Why Other Options are Incorrect
B (Automatic model selection):
The Generative AI Hub does not automatically choose models for the user. While it provides a side-by-side comparison tool, the developer must manually select which model to use based on performance, cost, and latency requirements.
C (Develop and debug ABAP code):
While you can use an LLM within the hub to generate ABAP code snippets, the hub itself is not an Integrated Development Environment (IDE) for ABAP. Debugging and developing ABAP is performed in SAP ADT (ABAP Development Tools) or the SAP Build Code environment.
Reference
SAP Help Portal: SAP AI Launchpad – Exploration and Development in Generative AI Hub.
SAP Learning Journey: Develop AI-Powered Applications – Using the Generative AI Hub Playground.
How does SAP ensure the enterprise-readiness of its AI solutions?
A. By implementing rigorous product standards for AI capabilities
B. By ensuring that AI models make bias-free decisions without human input
C. By using generic AI models without business context complying with AI ethics standards
A. By implementing rigorous product standards for AI capabilities
Explanation:
SAP ensures the enterprise-readiness of its AI solutions through the implementation of rigorous product standards across the entire AI development lifecycle. This approach encompasses multiple dimensions: adherence to global AI ethics principles (relevance, reliability, responsibility, respect), compliance with regulatory frameworks like the EU AI Act, robust data privacy and security measures, and seamless integration with existing enterprise systems. SAP's AI solutions are designed to be scalable, explainable, and auditable, meeting the demanding requirements of business-critical enterprise environments. These product standards ensure that AI capabilities deliver consistent, trustworthy, and compliant outcomes that organizations can confidently deploy.
Why Other Options Are Incorrect
B. By ensuring that AI models make bias-free decisions without human input:
Incorrect because achieving completely bias-free automated decisions without human oversight is neither realistic nor SAP's approach. SAP emphasizes human-in-the-loop principles where AI augments human decision-making rather than replacing it entirely, and continuous monitoring addresses bias rather than claiming absolute bias-free outputs.
C. By using generic AI models without business context complying with AI ethics standards:
Incorrect because SAP specifically embeds AI into business context rather than using generic models in isolation. SAP's strategy involves integrating AI deeply into business processes with relevant contextual data, ensuring outputs are meaningful for specific enterprise scenarios, not just applying generic models with ethics compliance.
References
SAP Trust Center: "Artificial Intelligence"
SAP News Center: "SAP Embeds Ethical Principles into AI Development"
SAP Insights: "Enterprise AI Strategy"
Which of the following techniques uses a prompt to generate or complete subsequent prompts (streamlining the prompt development process), and to effectively guide AI model responses?
A. Chain-of-thought prompting
B. Few-shot prompting
C. Meta prompting
D. One-shot prompting
C. Meta prompting
Explanation:
Meta prompting is a technique where a prompt is used to generate or complete subsequent prompts, thereby streamlining the prompt development process and effectively guiding AI model responses. The prefix "meta" signifies that the prompt operates at a higher level of abstraction—it is a prompt about prompting itself. Instead of directly asking the model for a final answer, the user first asks the model to create a better, more detailed, or more structured prompt for a specific task. This approach leverages the model's understanding of prompt engineering to automate and improve the prompt creation process. It is particularly useful for refining instructions, ensuring consistency across multiple queries, and optimizing prompts for complex tasks without manual trial and error.
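The two-step pattern described above can be sketched in a provider-agnostic way. Here `call_llm` is a hypothetical placeholder for any chat-completion call, and the prompt wording is purely illustrative:

```python
# Meta prompting sketch: the first call asks the model to WRITE a prompt,
# and the second call executes that generated prompt. `call_llm` is a
# hypothetical stand-in for any chat-completion client.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion client here")

def build_meta_prompt(task: str) -> str:
    """Step 1: a prompt whose only job is to produce a better prompt."""
    return (
        "You are an expert prompt engineer. Write a detailed, structured "
        f"prompt that instructs an LLM to do the following task: {task}. "
        "Specify the desired output format and any constraints."
    )

def run_with_meta_prompt(task: str) -> str:
    generated_prompt = call_llm(build_meta_prompt(task))  # prompt about prompting
    return call_llm(generated_prompt)                     # run the generated prompt
```

The defining feature is the two LLM calls: the output of the first call is itself the prompt for the second, which is what distinguishes meta prompting from chain-of-thought, one-shot, and few-shot techniques.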
Why Other Options Are Incorrect
A. Chain-of-thought prompting:
Incorrect because chain-of-thought prompting encourages the model to reason step-by-step before providing a final answer, improving logical reasoning. It does not involve generating subsequent prompts; it generates intermediate reasoning steps within a single response.
B. Few-shot prompting:
Incorrect as few-shot prompting provides the model with several examples of desired input-output behavior within the prompt to demonstrate the task. While it guides responses, it does not use a prompt to generate or complete other prompts.
D. One-shot prompting:
Incorrect because one-shot prompting provides a single example to illustrate the desired task. Like few-shot, it is an in-context learning technique but does not involve generating subsequent prompts for streamlining development.
References
SAP Learning: "Prompt Engineering Best Practices"
Prompt Engineering Guide (DAIR.AI): "Meta Prompting"
What are some examples of generative AI technologies? Note: There are 2 correct answers to this question.
A. AI models that generate new content based on training data
B. Rule-based algorithms
C. Robotic process automation
D. Foundation models
A. AI models that generate new content based on training data
D. Foundation models
Explanation:
A. AI models that generate new content based on training data
This is the fundamental definition of generative AI technology. Generative AI refers to artificial intelligence systems capable of creating new, original content—such as text, images, code, audio, or video—that mirrors the patterns and structures learned from their training data. Unlike discriminative AI, which classifies or predicts based on existing data, generative models produce novel outputs. This capability underpins applications like ChatGPT generating conversational text, DALL-E creating images from descriptions, or GitHub Copilot writing code.
D. Foundation models
Foundation models are a specific class of generative AI technologies. These are large-scale AI models trained on vast amounts of broad data that can be adapted to a wide range of downstream tasks. Examples include large language models (LLMs) like GPT-4, PaLM, and LLaMA, as well as multimodal models like GPT-4V or DALL-E. Foundation models serve as the technological backbone for most modern generative AI applications, providing the core generative capabilities that can be fine-tuned or prompted for specific use cases.
Why Other Options Are Incorrect
B. Rule-based algorithms:
Incorrect because rule-based algorithms follow predefined logical rules and do not learn from data or generate novel content. They represent traditional symbolic AI, not generative AI, which relies on learning patterns from training data.
C. Robotic process automation:
Incorrect as RPA automates repetitive, rule-based tasks by mimicking human interactions with digital systems. It does not generate new content or involve learning from data, placing it outside generative AI technologies.
References
SAP Learning: "Introduction to Generative AI"
Stanford CRFM: "On the Opportunities and Risks of Foundation Models"