This set of Azure Data Factory MCQs is crafted to help you master concepts related to data sources and sinks in Azure Data Factory (ADF). Learn about supported data stores, linked services, and dataset configuration. Gain insights into integrating Azure Blob Storage, Azure SQL Database, Azure Data Lake, and on-premises sources, and into securely managing credentials with Azure Key Vault.
Chapter 3: Working with Data Sources and Sinks – MCQs
Topic 1: Supported Data Stores (Azure and Non-Azure)
1. Which of the following is NOT a supported data store in Azure Data Factory?
a) Azure Blob Storage
b) Amazon S3
c) Google Sheets
d) GitHub

2. What is the role of a data source in Azure Data Factory?
a) To monitor pipeline performance
b) To act as the origin of the data to be processed
c) To deploy applications
d) To store diagnostic logs

3. Which of the following non-Azure data stores can Azure Data Factory connect to?
a) Amazon RDS
b) Oracle Database
c) SAP HANA
d) All of the above

4. How does Azure Data Factory enable integration with non-Azure data stores?
a) Through Azure Monitor alerts
b) Using integration runtimes
c) By creating virtual machines
d) Through manual scripts

5. Which data store supports direct transformation in Azure Data Factory?
a) Azure SQL Database
b) Azure Blob Storage
c) Azure Data Lake
d) None of the above

6. What feature ensures compatibility with third-party data sources in Azure Data Factory?
a) Dynamic mapping templates
b) Native connectors and integration runtimes
c) Custom SQL queries
d) Role-based access control
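The connector questions above (questions 4 and 6 in particular) come down to the same mechanism: a non-Azure store such as Amazon S3 is reached by defining a linked service over ADF's native connector for it. Here is a minimal sketch using the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, and key values are placeholders, not values from this chapter:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AmazonS3LinkedService,
    LinkedServiceResource,
    SecureString,
)

# Placeholder identifiers -- substitute your own subscription and factory details.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
FACTORY_NAME = "my-adf"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# ADF's native Amazon S3 connector: a non-Azure store registered as a linked service.
s3_ls = LinkedServiceResource(
    properties=AmazonS3LinkedService(
        access_key_id="<access-key-id>",
        secret_access_key=SecureString(value="<secret-access-key>"),
    )
)
client.linked_services.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, "AmazonS3LinkedService", s3_ls
)
```

The same linked_services.create_or_update pattern applies to the other connector types; only the model class and its credential properties change.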
Topic 2: Linked Services and Dataset Configuration
7. What is a linked service in Azure Data Factory?
a) A reference to a resource connection string
b) A data transformation rule
c) An Azure Virtual Network configuration
d) A method for cost monitoring

8. Which parameter is mandatory when creating a linked service?
a) Data pipeline name
b) Connection credentials
c) Integration runtime ID
d) Resource group location

9. What is the purpose of a dataset in Azure Data Factory?
a) To define the schema of the data
b) To schedule pipeline executions
c) To configure activity triggers
d) To set up alerts and notifications

10. Which feature in ADF allows dynamic configuration of datasets?
a) Custom mapping templates
b) Parameterized datasets
c) Pre-defined triggers
d) Static runtime settings

11. How can you test a linked service configuration in Azure Data Factory?
a) By running a data pipeline
b) By using the Test Connection button
c) By creating a monitoring alert
d) By enabling diagnostics

12. Which authentication type is NOT supported by linked services?
a) OAuth
b) Windows Authentication
c) Basic Authentication
d) Anonymous
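The questions above correspond to two SDK objects: a LinkedServiceResource carries the connection definition (including credentials), while a DatasetResource names a linked service and describes where the data lives and what shape it has. Below is a minimal sketch, with placeholder account and factory names, of a parameterized dataset (question 10's answer) whose file name is resolved at run time from @dataset().fileName:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLinkedService,
    AzureBlobStorageLocation,
    DatasetResource,
    DelimitedTextDataset,
    LinkedServiceReference,
    LinkedServiceResource,
    ParameterSpecification,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, FACTORY = "my-rg", "my-adf"

# Linked service: the connection definition, including credentials.
blob_ls = LinkedServiceResource(
    properties=AzureBlobStorageLinkedService(
        connection_string="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
    )
)
client.linked_services.create_or_update(RG, FACTORY, "BlobLS", blob_ls)

# Parameterized dataset: the file name is supplied by the calling activity at run time.
blob_ds = DatasetResource(
    properties=DelimitedTextDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="BlobLS"
        ),
        parameters={"fileName": ParameterSpecification(type="String")},
        location=AzureBlobStorageLocation(
            container="input",
            file_name={"value": "@dataset().fileName", "type": "Expression"},
        ),
    )
)
client.datasets.create_or_update(RG, FACTORY, "InputCsv", blob_ds)
```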
Topic 3: Using Azure Blob Storage, SQL Database, and Data Lake with ADF
13. What role does Azure Blob Storage play in Azure Data Factory?
a) Data transformation engine
b) Temporary or permanent data storage
c) Logging mechanism for pipelines
d) Load balancer

14. How can you load data from Azure Blob Storage into a SQL Database using ADF?
a) Using a Copy Data activity
b) Using an Integration Runtime Manager
c) By deploying a Data Sync service
d) Through a manual script

15. What is the main advantage of using Azure Data Lake with Azure Data Factory?
a) Real-time data visualization
b) Scalability and compatibility with big data processing
c) Built-in machine learning models
d) Cost optimization

16. How does Azure Data Factory access data in Azure SQL Database?
a) Through direct queries in the ADF portal
b) By using a linked service with appropriate credentials
c) By deploying virtual machines for access
d) By generating custom reports

17. What is required for transforming data in Azure Data Lake using ADF?
a) A transformation runtime
b) Mapping Data Flows
c) Diagnostic settings enabled
d) Pre-configured templates

18. Which integration method is best for handling large datasets in Azure Data Lake?
a) PolyBase integration
b) Batch processing
c) Real-time streaming
d) Static configuration
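As a concrete companion to question 14, the sketch below wires a Copy Data activity from a Blob source to an Azure SQL Database sink. It assumes the hypothetical "InputCsv" dataset from the previous sketch and uses placeholder connection values:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlDatabaseLinkedService,
    AzureSqlSink,
    AzureSqlTableDataset,
    CopyActivity,
    DatasetReference,
    DatasetResource,
    DelimitedTextSource,
    LinkedServiceReference,
    LinkedServiceResource,
    PipelineResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, FACTORY = "my-rg", "my-adf"

# Sink side: an Azure SQL Database linked service and a table dataset over it.
sql_ls = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(
        connection_string="Server=tcp:<server>.database.windows.net;Database=<db>;<auth>"
    )
)
client.linked_services.create_or_update(RG, FACTORY, "SqlLS", sql_ls)

sql_ds = DatasetResource(
    properties=AzureSqlTableDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="SqlLS"
        ),
        table_name="dbo.StagingTable",
    )
)
client.datasets.create_or_update(RG, FACTORY, "SqlTable", sql_ds)

# Copy Data activity: Blob (source) -> Azure SQL Database (sink).
copy = CopyActivity(
    name="CopyBlobToSql",
    inputs=[
        DatasetReference(
            type="DatasetReference",
            reference_name="InputCsv",
            parameters={"fileName": "data.csv"},  # fills the dataset parameter
        )
    ],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SqlTable")],
    source=DelimitedTextSource(),
    sink=AzureSqlSink(),
)
client.pipelines.create_or_update(
    RG, FACTORY, "BlobToSqlPipeline", PipelineResource(activities=[copy])
)
```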
Topic 4: Connecting to On-Premises and Third-Party Sources
19. How does Azure Data Factory connect to on-premises data sources?
a) Using a self-hosted integration runtime
b) By configuring an Azure virtual network
c) Through direct Azure portal access
d) By deploying SQL Data Sync

20. What is required to connect ADF with a third-party source like Salesforce?
a) OAuth authentication and API integration
b) Role-based access control policies
c) Static dataset configuration
d) Custom SQL scripts

21. How can latency issues be minimized when connecting to on-premises sources?
a) Using Azure Traffic Manager
b) Optimizing the self-hosted integration runtime
c) Deploying multiple pipelines
d) Enabling alerts

22. Which of the following can act as on-premises data sources for ADF?
a) MySQL Database
b) Oracle Database
c) SAP HANA
d) All of the above

23. How can you ensure secure data transfer from on-premises to Azure?
a) By encrypting data in transit
b) By using shared network drives
c) By enabling Azure Monitor
d) Through diagnostic settings

24. What is the role of a gateway in connecting on-premises data to Azure?
a) To provide secure and reliable communication
b) To encrypt secrets stored in Azure Key Vault
c) To monitor resource usage
d) To act as a backup mechanism
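Questions 19 and 21 both point at the self-hosted integration runtime, which is itself a resource registered in the factory; the on-premises machine then joins it using an authentication key. A minimal sketch, again with placeholder names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource,
    SelfHostedIntegrationRuntime,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, FACTORY = "my-rg", "my-adf"

# Register a self-hosted integration runtime in the factory. The on-premises
# machine running the IR agent later joins it with one of the keys below.
ir = IntegrationRuntimeResource(
    properties=SelfHostedIntegrationRuntime(
        description="Bridge to on-premises SQL Server / Oracle / SAP HANA"
    )
)
client.integration_runtimes.create_or_update(RG, FACTORY, "OnPremIR", ir)

# Retrieve the keys used to register the on-premises IR node.
keys = client.integration_runtimes.list_auth_keys(RG, FACTORY, "OnPremIR")
print(keys.auth_key1)
```

Latency tuning (question 21) then happens on the runtime itself: placing the IR node close to the source, adding nodes, and adjusting its concurrent job capacity.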
Topic 5: Managing Credentials with Azure Key Vault
25. How does Azure Key Vault enhance security in Azure Data Factory?
a) By storing connection credentials securely
b) By creating additional linked services
c) By monitoring pipeline execution logs
d) By generating reports

26. How can credentials in Azure Key Vault be used in ADF pipelines?
a) By referencing them in linked services
b) By embedding them in activity scripts
c) Through manual updates in datasets
d) By enabling guest access

27. What is required to integrate ADF with Azure Key Vault?
a) Proper permissions for the managed identity
b) A diagnostic setting for the Key Vault
c) A static dataset configuration
d) An Azure Storage account

28. Which operation is NOT supported by Azure Key Vault in ADF?
a) Encrypting database queries
b) Storing API keys and secrets
c) Managing certificates
d) Rotating credentials automatically

29. How can you update credentials stored in Azure Key Vault for ADF?
a) By replacing the existing secret version
b) By deleting and recreating the linked service
c) By running a PowerShell script
d) By restarting the pipeline

30. Why is Azure Key Vault recommended for managing credentials in ADF?
a) Centralized and secure storage of secrets
b) Automated pipeline monitoring
c) Reduced latency in data transfer
d) Improved pipeline performance
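Questions 25-30 describe the usual Key Vault pattern: the factory's managed identity is granted read access to secrets, a Key Vault linked service points at the vault, and other linked services reference secrets by name instead of embedding credentials. A minimal sketch under those assumptions (vault and secret names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureKeyVaultLinkedService,
    AzureKeyVaultSecretReference,
    AzureSqlDatabaseLinkedService,
    LinkedServiceReference,
    LinkedServiceResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, FACTORY = "my-rg", "my-adf"

# Linked service pointing at the vault itself. The factory's managed identity
# needs permission to read secrets from this vault.
kv_ls = LinkedServiceResource(
    properties=AzureKeyVaultLinkedService(
        base_url="https://<vault-name>.vault.azure.net/"
    )
)
client.linked_services.create_or_update(RG, FACTORY, "KeyVaultLS", kv_ls)

# SQL linked service whose connection string is pulled from a Key Vault secret,
# so no credential is embedded in the factory definition.
sql_ls = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(
        connection_string=AzureKeyVaultSecretReference(
            store=LinkedServiceReference(
                type="LinkedServiceReference", reference_name="KeyVaultLS"
            ),
            secret_name="sql-connection-string",
        )
    )
)
client.linked_services.create_or_update(RG, FACTORY, "SqlLSFromVault", sql_ls)
```

Because the linked service stores only the secret name, rotating a credential amounts to writing a new version of the secret; the reference in the factory stays unchanged.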
Answer Key
1. c) Google Sheets
2. b) To act as the origin of the data to be processed
3. d) All of the above
4. b) Using integration runtimes
5. d) None of the above
6. b) Native connectors and integration runtimes
7. a) A reference to a resource connection string
8. b) Connection credentials
9. a) To define the schema of the data
10. b) Parameterized datasets
11. b) By using the Test Connection button
12. d) Anonymous
13. b) Temporary or permanent data storage
14. a) Using a Copy Data activity
15. b) Scalability and compatibility with big data processing
16. b) By using a linked service with appropriate credentials
17. b) Mapping Data Flows
18. a) PolyBase integration
19. a) Using a self-hosted integration runtime
20. a) OAuth authentication and API integration
21. b) Optimizing the self-hosted integration runtime
22. d) All of the above
23. a) By encrypting data in transit
24. a) To provide secure and reliable communication
25. a) By storing connection credentials securely
26. a) By referencing them in linked services
27. a) Proper permissions for the managed identity
28. a) Encrypting database queries
29. a) By replacing the existing secret version
30. a) Centralized and secure storage of secrets