MCQs on Working with Data Sources and Sinks | Azure Data Factory MCQ Questions

This set of Azure Data Factory MCQ questions is crafted to help you master concepts related to data sources and sinks in Azure Data Factory (ADF). It covers supported data stores, linked services, and dataset configuration, and gives you insight into integrating Azure Blob Storage, Azure SQL Database, Azure Data Lake, and on-premises sources, as well as securely managing credentials with Azure Key Vault.


Chapter 3: Working with Data Sources and Sinks – MCQs

Topic 1: Supported Data Stores (Azure and Non-Azure)

  1. Which of the following is NOT a supported data store in Azure Data Factory?
    a) Azure Blob Storage
    b) Amazon S3
    c) Google Sheets
    d) GitHub
  2. What is the role of a data source in Azure Data Factory?
    a) To monitor pipeline performance
    b) To act as the origin of the data to be processed
    c) To deploy applications
    d) To store diagnostic logs
  3. Which of the following non-Azure data stores can Azure Data Factory connect to?
    a) Amazon RDS
    b) Oracle Database
    c) SAP HANA
    d) All of the above
  4. How does Azure Data Factory enable integration with non-Azure data stores?
    a) Through Azure Monitor alerts
    b) Using integration runtimes
    c) By creating virtual machines
    d) Through manual scripts
  5. Which data store supports direct transformation in Azure Data Factory?
    a) Azure SQL Database
    b) Azure Blob Storage
    c) Azure Data Lake
    d) None of the above
  6. What feature ensures compatibility with third-party data sources in Azure Data Factory?
    a) Dynamic mapping templates
    b) Native connectors and integration runtimes
    c) Custom SQL queries
    d) Role-based access control
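
For quick reference, below is a minimal sketch of how a non-Azure store such as Amazon S3 is registered through a native connector in ADF's JSON authoring view. The linked service name and the angle-bracket values are placeholders assumed for the example, not prescribed settings:

    {
        "name": "AmazonS3LinkedService",
        "properties": {
            "type": "AmazonS3",
            "typeProperties": {
                "accessKeyId": "<access-key-id>",
                "secretAccessKey": {
                    "type": "SecureString",
                    "value": "<secret-access-key>"
                }
            }
        }
    }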

Topic 2: Linked Services and Dataset Configuration

  1. What is a linked service in Azure Data Factory?
    a) A reference to a resource connection string
    b) A data transformation rule
    c) An Azure Virtual Network configuration
    d) A method for cost monitoring
  2. Which parameter is mandatory when creating a linked service?
    a) Data pipeline name
    b) Connection credentials
    c) Integration runtime ID
    d) Resource group location
  3. What is the purpose of a dataset in Azure Data Factory?
    a) To define the schema of the data
    b) To schedule pipeline executions
    c) To configure activity triggers
    d) To set up alerts and notifications
  4. Which feature in ADF allows dynamic configuration of datasets?
    a) Custom mapping templates
    b) Parameterized datasets
    c) Pre-defined triggers
    d) Static runtime settings
  5. How can you test a linked service configuration in Azure Data Factory?
    a) By running a data pipeline
    b) By using the Test Connection button
    c) By creating a monitoring alert
    d) By enabling diagnostics
  6. Which authentication type is NOT supported by linked services?
    a) OAuth
    b) Windows Authentication
    c) Basic Authentication
    d) Anonymous
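
As a quick illustration of a parameterized dataset (Topic 2, question 4), here is a minimal sketch in ADF's JSON authoring view. The dataset, linked service, and parameter names are placeholders assumed for the example; the parameter value is supplied by the calling activity at run time:

    {
        "name": "BlobDelimitedTextDataset",
        "properties": {
            "type": "DelimitedText",
            "linkedServiceName": {
                "referenceName": "AzureBlobStorageLinkedService",
                "type": "LinkedServiceReference"
            },
            "parameters": {
                "fileName": { "type": "string" }
            },
            "typeProperties": {
                "location": {
                    "type": "AzureBlobStorageLocation",
                    "container": "input",
                    "fileName": {
                        "value": "@dataset().fileName",
                        "type": "Expression"
                    }
                }
            }
        }
    }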

Topic 3: Using Azure Blob Storage, SQL Database, and Data Lake with ADF

  1. What role does Azure Blob Storage play in Azure Data Factory?
    a) Data transformation engine
    b) Temporary or permanent data storage
    c) Logging mechanism for pipelines
    d) Load balancer
  2. How can you load data from Azure Blob Storage into a SQL Database using ADF?
    a) Using a Copy Data activity
    b) Using an Integration Runtime Manager
    c) By deploying a Data Sync service
    d) Through a manual script
  3. What is the main advantage of using Azure Data Lake with Azure Data Factory?
    a) Real-time data visualization
    b) Scalability and compatibility with big data processing
    c) Built-in machine learning models
    d) Cost optimization
  4. How does Azure Data Factory access data in Azure SQL Database?
    a) Through direct queries in the ADF portal
    b) By using a linked service with appropriate credentials
    c) By deploying virtual machines for access
    d) By generating custom reports
  5. What is required for transforming data in Azure Data Lake using ADF?
    a) A transformation runtime
    b) Mapping Data Flows
    c) Diagnostic settings enabled
    d) Pre-configured templates
  6. Which integration method is best for handling large datasets in Azure Data Lake?
    a) PolyBase integration
    b) Batch processing
    c) Real-time streaming
    d) Static configuration
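
The Blob-to-SQL load in Topic 3, question 2 is typically expressed as a Copy activity. A minimal sketch of that activity follows; the activity and dataset names are placeholders, and the activity would sit inside a pipeline's "activities" array:

    {
        "name": "CopyBlobToSql",
        "type": "Copy",
        "inputs": [ { "referenceName": "BlobInputDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "SqlOutputDataset", "type": "DatasetReference" } ],
        "typeProperties": {
            "source": { "type": "DelimitedTextSource" },
            "sink": { "type": "AzureSqlSink" }
        }
    }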

Topic 4: Connecting to On-Premises and Third-Party Sources

  1. How does Azure Data Factory connect to on-premises data sources?
    a) Using a self-hosted integration runtime
    b) By configuring an Azure virtual network
    c) Through direct Azure portal access
    d) By deploying SQL Data Sync
  2. What is required to connect ADF with a third-party source like Salesforce?
    a) OAuth authentication and API integration
    b) Role-based access control policies
    c) Static dataset configuration
    d) Custom SQL scripts
  3. How can latency issues be minimized when connecting to on-premises sources?
    a) Using Azure Traffic Manager
    b) Optimizing the self-hosted integration runtime
    c) Deploying multiple pipelines
    d) Enabling alerts
  4. Which of the following can act as on-premises data sources for ADF?
    a) MySQL Database
    b) Oracle Database
    c) SAP HANA
    d) All of the above
  5. How can you ensure secure data transfer from on-premises to Azure?
    a) By encrypting data in transit
    b) By using shared network drives
    c) By enabling Azure Monitor
    d) Through diagnostic settings
  6. What is the role of a gateway in connecting on-premises data to Azure?
    a) To provide secure and reliable communication
    b) To encrypt secrets stored in Azure Key Vault
    c) To monitor resource usage
    d) To act as a backup mechanism
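
Connecting to an on-premises store hinges on the "connectVia" reference to a self-hosted integration runtime. Below is a minimal sketch for an on-premises SQL Server linked service; the runtime name and the angle-bracket connection-string values are placeholders assumed for the example:

    {
        "name": "OnPremSqlServerLinkedService",
        "properties": {
            "type": "SqlServer",
            "connectVia": {
                "referenceName": "MySelfHostedIR",
                "type": "IntegrationRuntimeReference"
            },
            "typeProperties": {
                "connectionString": "Data Source=<on-prem-server>;Initial Catalog=<database>;Integrated Security=False;User ID=<user>;Password=<password>;"
            }
        }
    }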

Topic 5: Managing Credentials with Azure Key Vault

  1. How does Azure Key Vault enhance security in Azure Data Factory?
    a) By storing connection credentials securely
    b) By creating additional linked services
    c) By monitoring pipeline execution logs
    d) By generating reports
  2. How can credentials in Azure Key Vault be used in ADF pipelines?
    a) By referencing them in linked services
    b) By embedding them in activity scripts
    c) Through manual updates in datasets
    d) By enabling guest access
  3. What is required to integrate ADF with Azure Key Vault?
    a) Proper permissions for the managed identity
    b) A diagnostic setting for the Key Vault
    c) A static dataset configuration
    d) An Azure Storage account
  4. Which operation is NOT supported by Azure Key Vault in ADF?
    a) Encrypting database queries
    b) Storing API keys and secrets
    c) Managing certificates
    d) Rotating credentials automatically
  5. How can you update credentials stored in Azure Key Vault for ADF?
    a) By replacing the existing secret version
    b) By deleting and recreating the linked service
    c) By running a PowerShell script
    d) By restarting the pipeline
  6. Why is Azure Key Vault recommended for managing credentials in ADF?
    a) Centralized and secure storage of secrets
    b) Automated pipeline monitoring
    c) Reduced latency in data transfer
    d) Improved pipeline performance
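
Instead of embedding a connection string directly, a linked service can point to a secret held in Azure Key Vault (with the vault itself registered as a linked service). A minimal sketch is shown below; the linked service and secret names are placeholders assumed for the example:

    {
        "name": "AzureSqlDatabaseLinkedService",
        "properties": {
            "type": "AzureSqlDatabase",
            "typeProperties": {
                "connectionString": {
                    "type": "AzureKeyVaultSecret",
                    "store": {
                        "referenceName": "AzureKeyVaultLinkedService",
                        "type": "LinkedServiceReference"
                    },
                    "secretName": "SqlConnectionString"
                }
            }
        }
    }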

Answer Key

  Q.No  Answer
  1     c) Google Sheets
  2     b) To act as the origin of the data to be processed
  3     d) All of the above
  4     b) Using integration runtimes
  5     d) None of the above
  6     b) Native connectors and integration runtimes
  7     a) A reference to a resource connection string
  8     b) Connection credentials
  9     a) To define the schema of the data
  10    b) Parameterized datasets
  11    b) By using the Test Connection button
  12    d) Anonymous
  13    b) Temporary or permanent data storage
  14    a) Using a Copy Data activity
  15    b) Scalability and compatibility with big data processing
  16    b) By using a linked service with appropriate credentials
  17    b) Mapping Data Flows
  18    a) PolyBase integration
  19    a) Using a self-hosted integration runtime
  20    a) OAuth authentication and API integration
  21    b) Optimizing the self-hosted integration runtime
  22    d) All of the above
  23    a) By encrypting data in transit
  24    a) To provide secure and reliable communication
  25    a) By storing connection credentials securely
  26    a) By referencing them in linked services
  27    a) Proper permissions for the managed identity
  28    a) Encrypting database queries
  29    a) By replacing the existing secret version
  30    a) Centralized and secure storage of secrets

Note your answers on a blank sheet as you go, then tally them against the answer key above and give yourself a score.
