
Commit

update readme with spark (#127)
beltran authored Nov 9, 2020
1 parent 31d3ca6 commit 665852d
4 changes: 2 additions & 2 deletions README.md
@@ -2,7 +2,7 @@
[![Build Status](https://travis-ci.org/beltran/gohive.svg?branch=master)](https://travis-ci.org/beltran/gohive) [![Coverage Status](https://coveralls.io/repos/github/beltran/gohive/badge.svg?branch=master)](https://coveralls.io/github/beltran/gohive?branch=master)


- GoHive is a driver for Hive in go that supports connection mechanisms KERBEROS(Gssapi Sasl), NONE(Plain Sasl), LDAP, CUSTOM and NOSASL, both for binary and http transport, with and without SSL. The kerberos mechanism will pick a different authentication level depending on `hive.server2.thrift.sasl.qop`.
+ GoHive is a driver for Hive and the [Spark Distributed SQL Engine](https://spark.apache.org/docs/latest/sql-distributed-sql-engine.html) in go that supports connection mechanisms KERBEROS(Gssapi Sasl), NONE(Plain Sasl), LDAP, CUSTOM and NOSASL, both for binary and http transport, with and without SSL. The kerberos mechanism will pick a different authentication level depending on `hive.server2.thrift.sasl.qop`.

## Installation
Gohive can be installed with:
@@ -128,7 +128,7 @@ cursor.FetchOne(context.Background(), &i)
```
Alternatively, using the rowmap API, `m := cursor.RowMap(context.Background())`,
`m` would be `map[string]interface{}{"table_name.column_name": nil}` for a `NULL` value. It will return a map
- where the keys are `table_name.column_name`. This works fine with Hive, but with the [Spark Thrift SQL server](https://github.com/apache/spark/blob/master/sbin/start-thriftserver.sh) `table_name` is not present, so the keys are just `column_name`; this can [lead to problems](https://github.com/beltran/gohive/issues/120) if two tables share a column name, in which case the `FetchOne` API should be used instead.
+ where the keys are `table_name.column_name`. This works fine with Hive, but with the [Spark Thrift SQL server](https://spark.apache.org/docs/latest/sql-distributed-sql-engine.html) `table_name` is not present, so the keys are just `column_name`; this can [lead to problems](https://github.com/beltran/gohive/issues/120) if two tables share a column name, in which case the `FetchOne` API should be used instead.

## Running tests
Tests can be run with:
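The key collision described in the changed paragraph can be sketched without a live server. The helpers below (`stripTable`, `buildRow`) are illustrative stand-ins for what a RowMap-style API does with its keys; they are not part of the gohive API:

```go
package main

import (
	"fmt"
	"strings"
)

// stripTable mimics the Spark Thrift server behaviour: the "table_name."
// qualifier is absent, leaving only the bare column name.
func stripTable(key string) string {
	if i := strings.IndexByte(key, '.'); i >= 0 {
		return key[i+1:]
	}
	return key
}

// buildRow inserts (key, value) pairs into a row map the way a RowMap-style
// API would; colliding keys silently overwrite earlier values.
func buildRow(keys []string, vals []interface{}) map[string]interface{} {
	row := make(map[string]interface{}, len(keys))
	for i, k := range keys {
		row[k] = vals[i]
	}
	return row
}

func main() {
	vals := []interface{}{1, 2}

	// Hive qualifies keys with the table name, so joining tables a and b
	// on a shared column name keeps both values.
	hive := buildRow([]string{"a.id", "b.id"}, vals)
	fmt.Println(len(hive)) // 2

	// Spark's Thrift server drops the qualifier; both columns map to "id"
	// and one value is lost, which is why FetchOne is safer there.
	spark := buildRow([]string{stripTable("a.id"), stripTable("b.id")}, vals)
	fmt.Println(len(spark)) // 1
}
```

This is why the README recommends `FetchOne` over `RowMap` when querying Spark: positional fetching sidesteps the map-key collision entirely.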
