Configure GCP for a Google BigQuery sink connector#
To sink data from Apache Kafka® to Google BigQuery with the dedicated Aiven connector, you need to perform the following steps in the GCP console:
Create a new Google service account and generate a JSON service key
Verify that BigQuery API is enabled
Create a new BigQuery dataset, or choose an existing one, where the data is going to be stored
Create a new Google service account and generate a JSON service key#
Follow the instructions to:
create a new Google service account
create a JSON service key
The JSON service key will be used in the connector configuration
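As a quick check that the downloaded key is valid, you can load it with the google-auth Python library. This is a minimal sketch only; the file name credentials.json is a hypothetical example of where the key was saved.

```python
# Load the downloaded JSON service key and print the service account
# identity it belongs to. "credentials.json" is a hypothetical path.
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "credentials.json"
)

# This email is the identity you grant BigQuery dataset access to later
print(credentials.service_account_email)
```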
Verify that BigQuery API is enabled#
The BigQuery sink connector uses the BigQuery API to push the data. To enable it:
Navigate to the GCP APIs & Services dashboard and click the BigQuery API
Verify that the BigQuery API is already enabled, or follow the steps shown to enable it
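As an optional smoke test, you can call the API with the service key outside of the connector. The sketch below assumes the google-cloud-bigquery Python library is installed and the key is stored as credentials.json (a hypothetical path); if the API is disabled, the call fails with an error pointing to the enablement page.

```python
# List the datasets visible to the service account. A successful call
# confirms the BigQuery API is enabled for the project; a disabled API
# raises an error with a link to the enablement page.
from google.cloud import bigquery

client = bigquery.Client.from_service_account_json("credentials.json")

for dataset in client.list_datasets():
    print(dataset.dataset_id)
```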
Create the Google BigQuery dataset#
You can either send the Apache Kafka® data to an existing Google BigQuery dataset or create a new one in the GCP console by following the instructions on the dedicated page.
Tip
When creating the dataset, specify a data location in a region close to where your Aiven for Apache Kafka service is running to minimize latency.
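If you prefer to script this step, the dataset can also be created with the google-cloud-bigquery Python library. This is a sketch only: the dataset name kafka_sink and the EU location are hypothetical examples, and the client relies on the application default credentials of an account allowed to create datasets.

```python
# Create the target dataset, pinning the data location to a region close
# to the Aiven for Apache Kafka service. "kafka_sink" and "EU" are
# hypothetical examples; adjust them to your setup.
from google.cloud import bigquery

# Uses application default credentials of an account that can create datasets
client = bigquery.Client()

dataset = bigquery.Dataset(f"{client.project}.kafka_sink")
dataset.location = "EU"

# exists_ok=True makes the call a no-op if the dataset already exists
client.create_dataset(dataset, exists_ok=True)
```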
Grant dataset access to the service account#
The newly created service account needs access to the dataset in order to write data to it. Follow the dedicated instructions to check and modify the dataset permissions. The BigQuery Data Editor role is sufficient for the connector to work.
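The same grant can be scripted with the google-cloud-bigquery Python library. This is a sketch under a few assumptions: the client runs with credentials of an account allowed to modify the dataset's permissions, the dataset is the kafka_sink example from above, and the service account email is a hypothetical placeholder. At the dataset level, the legacy WRITER role maps to BigQuery Data Editor.

```python
# Append a dataset-level access entry for the connector's service account.
# The legacy "WRITER" dataset role corresponds to BigQuery Data Editor.
from google.cloud import bigquery

# Credentials of an account allowed to edit the dataset's permissions
client = bigquery.Client()

dataset = client.get_dataset(f"{client.project}.kafka_sink")

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="WRITER",
        entity_type="userByEmail",
        # Hypothetical service account email; use the one created earlier
        entity_id="connector-sa@my-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries

client.update_dataset(dataset, ["access_entries"])
```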