.readme-partials.yaml
introduction: |-
  > Node.js idiomatic client for [BigQuery Storage](https://cloud.google.com/bigquery).

  The BigQuery Storage product is divided into two major APIs: the Write API and the
  Read API. The BigQuery Storage API does not provide functionality related to managing
  BigQuery resources such as datasets, jobs, or tables.

  The BigQuery Storage Write API is a unified data-ingestion API for BigQuery.
  It combines streaming ingestion and batch loading into a single high-performance API.
  You can use the Storage Write API to stream records into BigQuery in real time or
  to batch process an arbitrarily large number of records and commit them in a single
  atomic operation.
  Read more in our [introduction guide](https://cloud.google.com/bigquery/docs/write-api).

  Using the system-provided default stream, the following code sample reads the schema
  of the destination table, constructs a JSON writer from it, and sends several batches
  of row data to the table.
  ```javascript
  const {adapt, managedwriter} = require('@google-cloud/bigquery-storage');
  const {WriterClient, JSONWriter} = managedwriter;

  async function appendJSONRowsDefaultStream() {
    // Placeholder identifiers: replace with your own project, dataset, and table.
    const projectId = 'my_project';
    const datasetId = 'my_dataset';
    const tableId = 'my_table';
    const destinationTable = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;

    const writeClient = new WriterClient({projectId});
    try {
      // Look up the destination table's schema via its default write stream.
      const writeStream = await writeClient.getWriteStream({
        streamId: `${destinationTable}/streams/_default`,
        view: 'FULL',
      });

      // Convert the table schema into a proto2 descriptor used to serialize rows.
      const protoDescriptor = adapt.convertStorageSchemaToProto2Descriptor(
        writeStream.tableSchema,
        'root'
      );

      // Open a connection to the table's default stream.
      const connection = await writeClient.createStreamConnection({
        streamId: managedwriter.DefaultStream,
        destinationTable,
      });
      const streamId = connection.getStreamId();

      const writer = new JSONWriter({
        streamId,
        connection,
        protoDescriptor,
      });

      let rows = [];
      const pendingWrites = [];

      // Row 1
      let row = {
        row_num: 1,
        customer_name: 'Octavia',
      };
      rows.push(row);

      // Row 2
      row = {
        row_num: 2,
        customer_name: 'Turing',
      };
      rows.push(row);

      // Send batch.
      let pw = writer.appendRows(rows);
      pendingWrites.push(pw);

      rows = [];

      // Row 3
      row = {
        row_num: 3,
        customer_name: 'Bell',
      };
      rows.push(row);

      // Send batch.
      pw = writer.appendRows(rows);
      pendingWrites.push(pw);

      // Wait for all batches to be acknowledged by the service.
      const results = await Promise.all(
        pendingWrites.map(pw => pw.getResult())
      );
      console.log('Write results:', results);
    } catch (err) {
      console.log(err);
    } finally {
      writeClient.close();
    }
  }
  ```
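
  Each call to `appendRows` queues one batch and returns a pending write; awaiting
  `getResult()` on each one (here collected with `Promise.all`) surfaces any errors the
  service reports for that batch before the client is closed.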

  The BigQuery Storage Read API provides fast access to BigQuery-managed storage by
  using a gRPC-based protocol. When you use the Storage Read API, structured data is
  sent over the wire in a binary serialization format, which allows additional
  parallelism among multiple consumers of a result set.

  Read more about how to [use the BigQuery Storage Read API](https://cloud.google.com/bigquery/docs/reference/storage).
  See sample code in the [Quickstart section](#quickstart).
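
  As a minimal sketch of the read path (not the library's official quickstart), the
  snippet below creates a read session for a table and streams back blocks of rows.
  The project, dataset, and table names are placeholders, the function name is
  illustrative, and decoding of the Avro-serialized rows is omitted.

  ```javascript
  const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');

  async function readTableRows() {
    // Placeholder identifiers: replace with your own project, dataset, and table.
    const projectId = 'my_project';
    const datasetId = 'my_dataset';
    const tableId = 'my_table';

    const client = new BigQueryReadClient();

    // Create a read session; the service may split the table across multiple streams.
    const [session] = await client.createReadSession({
      parent: `projects/${projectId}`,
      readSession: {
        table: `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`,
        dataFormat: 'AVRO',
      },
      maxStreamCount: 1,
    });

    // Read row blocks from the first stream. Decoding the serialized rows
    // (Avro here) is left to an Avro library such as avsc and omitted.
    const readRowsStream = client.readRows({
      readStream: session.streams[0].name,
      offset: 0,
    });

    readRowsStream.on('data', response => {
      console.log(`Received a block of ${response.rowCount} rows`);
    });
    readRowsStream.on('error', console.error);
    readRowsStream.on('end', () => {
      console.log('All rows received.');
    });
  }
  ```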