MCQs on Model Deployment | Amazon SageMaker MCQ Questions

Dive into these Amazon SageMaker MCQ questions and answers covering essential topics such as Model Deployment, Hosting Models on SageMaker, Scaling and Monitoring Endpoints, and A/B Testing and Model Updates. Perfect for developers, data scientists, and cloud enthusiasts aiming to master SageMaker concepts for certifications and beyond.


4. Model Deployment


1-10: Hosting Models on SageMaker
  1. What is the primary purpose of hosting a model on Amazon SageMaker?
    a) To build machine learning models
    b) To deploy and serve machine learning models for predictions
    c) To store training datasets
    d) To visualize data
  2. Which SageMaker feature allows models to be hosted with automatic scaling?
    a) SageMaker Training Jobs
    b) SageMaker Endpoints
    c) SageMaker Pipelines
    d) SageMaker Feature Store
  3. How are models typically deployed to an endpoint in SageMaker?
    a) By uploading them to S3
    b) By creating a SageMaker hosting endpoint
    c) By configuring an EC2 instance manually
    d) By using AWS Batch
  4. Which SageMaker resource is used to configure the hardware for hosting models?
    a) Endpoint configuration
    b) Training job configuration
    c) Notebook instance
    d) Feature group
  5. What is the main benefit of SageMaker Multi-Model Endpoints?
    a) Reduced latency for inference
    b) Hosting multiple models on a single endpoint
    c) Increased accuracy for model predictions
    d) Integration with S3 buckets
  6. What type of input does a SageMaker endpoint typically accept for predictions?
    a) Structured query language (SQL)
    b) Application programming interface (API) calls
    c) CloudFormation templates
    d) Docker containers
  7. How can you secure access to SageMaker endpoints?
    a) By using IAM policies and VPC configurations
    b) By enabling public access
    c) By encrypting S3 buckets
    d) By setting up EC2 key pairs
  8. What format must a trained model be in before deploying on SageMaker?
    a) JSON
    b) Serialized model artifacts
    c) CSV
    d) Text file
  9. Which AWS service is often used alongside SageMaker for secure storage of model artifacts?
    a) Amazon RDS
    b) Amazon S3
    c) AWS Glue
    d) Amazon Redshift
  10. What is the main use of the SageMaker runtime API?
    a) To monitor endpoint metrics
    b) To make inference requests to a deployed model
    c) To visualize training jobs
    d) To build pipelines for data preparation
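The hosting flow the questions above describe is: serialize the trained model, store the artifacts in S3, create a Model, an Endpoint Configuration (which picks the hardware), and an Endpoint, then call the runtime API for predictions. Below is a minimal sketch of the request payloads involved; the model name, image URI, S3 path, and role ARN are hypothetical placeholders, and in practice each dict would be passed to the matching boto3 call (e.g. `sagemaker_client.create_endpoint_config(**build_endpoint_config_request(...))`).

```python
def build_model_request(model_name, image_uri, artifact_s3_uri, role_arn):
    """Request body for create_model: inference container + S3 model artifacts."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,               # inference container image
            "ModelDataUrl": artifact_s3_uri,  # serialized artifacts, e.g. s3://.../model.tar.gz
        },
        "ExecutionRoleArn": role_arn,
    }


def build_endpoint_config_request(config_name, model_name,
                                  instance_type="ml.m5.large",
                                  instance_count=1):
    """Request body for create_endpoint_config: the hardware hosting the model."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": instance_count,
            "InitialVariantWeight": 1.0,
        }],
    }
```

After `create_endpoint` finishes, predictions go through the runtime API (`sagemaker-runtime`'s `invoke_endpoint`), which is why API calls, not SQL or templates, are the input an endpoint accepts.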

11-18: Scaling and Monitoring Endpoints
  11. Which feature in SageMaker automatically adjusts the number of instances for an endpoint based on demand?
    a) Auto Scaling for Endpoints
    b) Endpoint Replication
    c) Load Balancer Scaling
    d) Multi-AZ Scaling
  12. What metric can be monitored to check endpoint performance in SageMaker?
    a) Instance Health
    b) CPU Utilization
    c) Invocations per Second
    d) Training Accuracy
  13. Where can you view SageMaker endpoint metrics?
    a) AWS CloudWatch
    b) Amazon QuickSight
    c) AWS Glue
    d) AWS Cost Explorer
  14. Which scaling option is best for endpoints experiencing sudden traffic spikes?
    a) Manual Scaling
    b) Predictive Scaling
    c) Dynamic Auto Scaling
    d) Scheduled Scaling
  15. What is the primary benefit of monitoring SageMaker endpoints?
    a) Reducing costs
    b) Detecting anomalies in model predictions
    c) Enhancing model accuracy
    d) Automating retraining of models
  16. Which SageMaker feature helps in identifying and troubleshooting endpoint issues?
    a) Endpoint Debugger
    b) Model Monitor
    c) Training Monitor
    d) SageMaker Studio
  17. What does the “InvocationsFailed” metric indicate in a SageMaker endpoint?
    a) Number of failed training jobs
    b) Failed inference requests
    c) Degraded endpoint health
    d) Model artifact corruption
  18. How can you scale a SageMaker endpoint to handle high traffic?
    a) Increase the instance count in endpoint configuration
    b) Upgrade the SageMaker Studio subscription
    c) Use AWS Data Pipeline
    d) Migrate the endpoint to an EC2 instance
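Endpoint auto scaling, as the questions above cover, is configured through Application Auto Scaling: you register the endpoint variant's `DesiredInstanceCount` as a scalable target, then attach a target-tracking policy on the `SageMakerVariantInvocationsPerInstance` metric (reported via CloudWatch). The sketch below builds the two request bodies; the endpoint name, variant name, capacity bounds, and target value are hypothetical, and the dicts would be passed to `application-autoscaling`'s `register_scalable_target` and `put_scaling_policy` calls.

```python
def build_scaling_requests(endpoint_name, variant_name,
                           target_invocations_per_instance=100.0,
                           min_capacity=1, max_capacity=4):
    """Request bodies for register_scalable_target and put_scaling_policy."""
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    register = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_capacity,   # floor: never scale below this
        "MaxCapacity": max_capacity,   # ceiling: cap cost during spikes
    }
    policy = {
        "PolicyName": f"{endpoint_name}-invocations-scaling",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            # keep average invocations per instance near this value
            "TargetValue": target_invocations_per_instance,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
            },
        },
    }
    return register, policy
```

Because the policy tracks invocations per instance, instance count grows and shrinks with demand, which is what makes dynamic auto scaling the right fit for sudden traffic spikes.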

19-25: A/B Testing and Model Updates
  19. What is A/B testing in SageMaker primarily used for?
    a) Comparing two or more machine learning models
    b) Automating training processes
    c) Monitoring endpoint performance
    d) Optimizing AWS resources
  20. How is traffic distributed between models during A/B testing in SageMaker?
    a) Based on availability zones
    b) By defining traffic weights for each model
    c) By splitting users into groups
    d) Using AWS Lambda functions
  21. Which method is commonly used to update a deployed model in SageMaker?
    a) Redeploying the endpoint with a new model
    b) Using AWS Batch processing
    c) Restarting the training job
    d) Editing the endpoint policy
  22. What is the advantage of A/B testing in SageMaker?
    a) It improves endpoint monitoring
    b) It allows testing new models without disrupting live traffic
    c) It reduces data processing costs
    d) It enables distributed training
  23. During A/B testing, what happens when the newer model performs better?
    a) Both models are deleted
    b) Traffic is gradually shifted to the new model
    c) The older model is archived automatically
    d) The endpoint scales down
  24. How does SageMaker help minimize downtime during model updates?
    a) By creating new endpoints in advance
    b) By enabling seamless endpoint transition
    c) By using on-demand instance scaling
    d) By pausing inference requests
  25. What configuration is required for enabling A/B testing in SageMaker?
    a) Setting traffic splitting in endpoint configuration
    b) Integrating with AWS Config
    c) Updating the IAM role
    d) Using DynamoDB triggers
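A/B testing as described above is configured by listing multiple production variants in one endpoint configuration, each with a traffic weight; SageMaker routes each variant a share equal to its weight divided by the sum of all weights. The sketch below builds such a variant list and computes the resulting shares. The model and variant names are hypothetical; shifting traffic to the winner is done by updating the weights (e.g. via `update_endpoint_weights_and_capacities`) rather than tearing the endpoint down.

```python
def build_ab_variants(model_a, model_b, weight_a=9.0, weight_b=1.0,
                      instance_type="ml.m5.large"):
    """ProductionVariants for an endpoint config hosting two models side by side."""
    return [
        {"VariantName": "ModelA", "ModelName": model_a,
         "InstanceType": instance_type, "InitialInstanceCount": 1,
         "InitialVariantWeight": weight_a},
        {"VariantName": "ModelB", "ModelName": model_b,
         "InstanceType": instance_type, "InitialInstanceCount": 1,
         "InitialVariantWeight": weight_b},
    ]


def traffic_share(variants):
    """Each variant's traffic fraction = its weight / sum of all weights."""
    total = sum(v["InitialVariantWeight"] for v in variants)
    return {v["VariantName"]: v["InitialVariantWeight"] / total
            for v in variants}
```

With the default weights 9.0 and 1.0 above, ModelA receives 90% of requests and the candidate ModelB receives 10%, letting you evaluate the new model on live traffic without disrupting most users.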

Answer Key

Qno. Answer (Option with Text)
1. b) To deploy and serve machine learning models for predictions
2. b) SageMaker Endpoints
3. b) By creating a SageMaker hosting endpoint
4. a) Endpoint configuration
5. b) Hosting multiple models on a single endpoint
6. b) Application programming interface (API) calls
7. a) By using IAM policies and VPC configurations
8. b) Serialized model artifacts
9. b) Amazon S3
10. b) To make inference requests to a deployed model
11. a) Auto Scaling for Endpoints
12. c) Invocations per Second
13. a) AWS CloudWatch
14. c) Dynamic Auto Scaling
15. b) Detecting anomalies in model predictions
16. b) Model Monitor
17. b) Failed inference requests
18. a) Increase the instance count in endpoint configuration
19. a) Comparing two or more machine learning models
20. b) By defining traffic weights for each model
21. a) Redeploying the endpoint with a new model
22. b) It allows testing new models without disrupting live traffic
23. b) Traffic is gradually shifted to the new model
24. b) By enabling seamless endpoint transition
25. a) Setting traffic splitting in endpoint configuration

Use a blank sheet to note your answers, then tally them against the answer key above and give yourself a score.
