18.81. Release 0.76

MySQL and PostgreSQL Connectors

This release adds the MySQL Connector and PostgreSQL Connector for querying and creating tables in external relational databases. They can be used to join or copy data between different systems such as MySQL and Hive, between two different MySQL or PostgreSQL instances, or any combination of these.
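
As a sketch of how the new connectors can be exercised, the following Java snippet runs a cross-catalog join through the Presto JDBC driver. It assumes the driver is on the classpath; the host name, catalog names (mysql, hive), schemas, tables, and user shown are illustrative placeholders that depend on how the catalogs are configured on the target cluster.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CrossCatalogJoin
    {
        public static void main(String[] args)
                throws Exception
        {
            // Connect to the coordinator; the catalog/schema in the URL are only
            // defaults, since the query below fully qualifies its tables.
            String url = "jdbc:presto://coordinator.example.com:8080/hive/default";
            try (Connection connection = DriverManager.getConnection(url, "analyst", null);
                    Statement statement = connection.createStatement();
                    // Join a Hive table against a dimension table stored in MySQL.
                    ResultSet rs = statement.executeQuery(
                            "SELECT o.orderkey, c.name " +
                            "FROM hive.default.orders o " +
                            "JOIN mysql.crm.customers c ON o.custkey = c.id")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("orderkey") + " " + rs.getString("name"));
                }
            }
        }
    }

Copying data works the same way, for example with a CREATE TABLE ... AS SELECT statement whose source and destination tables live in different catalogs.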

Hive Changes

The new Hive connector configuration property hive.s3.socket-timeout allows changing the socket timeout for queries that read or write to Amazon S3. Additionally, the previously added hive.s3.max-connections property was not being respected and always used the default of 500; it is now honored.
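
For reference, here is a minimal sketch of a Hive catalog properties file (conventionally etc/catalog/hive.properties) that sets both S3 properties; the connector name, metastore URI, and the particular values are placeholders rather than recommendations.

    connector.name=hive-hadoop2
    hive.metastore.uri=thrift://metastore.example.com:9083

    # Raise the S3 socket timeout for long-running reads and writes (illustrative value).
    hive.s3.socket-timeout=2m

    # Cap the S3 connection pool; 500 is the default mentioned above.
    hive.s3.max-connections=500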

Hive allows the partitions in a table to have a different schema than the table. In particular, it allows changing the type of a column without changing the column type of existing partitions. The Hive connector does not support this and could previously return garbage data for partitions stored using the RCFile Text format when a column's type was converted from a non-numeric type (such as STRING) to a numeric type (such as BIGINT) and the actual data in existing partitions was not numeric. The Hive connector now detects this scenario and fails the query after the partition metadata has been read.

The hive.storage-format configuration property is broken and has been disabled: it set the requested storage format in the table metadata but always wrote the table data using RCBINARY. Support for this property will be implemented in a future release.

General Changes

  • Fix hang in verifier when an exception occurs.
  • Fix the chr() function to work with Unicode code points instead of ASCII code points (see the example after this list).
  • The JDBC driver no longer hangs the JVM on shutdown (all threads are daemon threads).
  • Fix incorrect parsing of function arguments.
  • The bytecode compiler now caches generated code for join and group by queries, which should improve performance and CPU efficiency for these queries.
  • Improve planning performance for certain trivial queries over tables with many partitions.
  • Avoid creating large output pages. This should mitigate some cases of “Remote page is too large” errors.
  • The coordinator/worker communication layer is now fully asynchronous. Specifically, long-poll requests no longer tie up a thread on the worker. This makes heavily loaded clusters more efficient.
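
As a sketch of the chr() change, the query below (again issued through the JDBC driver, with placeholder connection details) asks for code point 960. Under Unicode semantics this yields the single character π (U+03C0), instead of being limited to the ASCII range.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ChrUnicodeExample
    {
        public static void main(String[] args)
                throws Exception
        {
            String url = "jdbc:presto://coordinator.example.com:8080/hive/default";
            try (Connection connection = DriverManager.getConnection(url, "analyst", null);
                    Statement statement = connection.createStatement();
                    // chr() now treats its argument as a Unicode code point.
                    ResultSet rs = statement.executeQuery("SELECT chr(960) AS pi")) {
                rs.next();
                System.out.println(rs.getString("pi")); // prints the Greek small letter pi
            }
        }
    }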