22.17. Release 0.141-t
Presto 0.141-t is equivalent to Presto release 0.141, with some additional features.
Add support for prepared statements and parameters via SQL syntax, as sketched below:
- PREPARE
- EXECUTE
- DEALLOCATE PREPARE
- DESCRIBE INPUT
- DESCRIBE OUTPUT
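A minimal end-to-end sketch, assuming the TPC-H nation table is available:

```sql
-- Create a named prepared statement with one parameter
PREPARE my_select FROM
SELECT name FROM nation WHERE regionkey = ?;

-- Inspect the expected parameters and the output columns
DESCRIBE INPUT my_select;
DESCRIBE OUTPUT my_select;

-- Execute, binding a value to the parameter
EXECUTE my_select USING 1;

-- Release the prepared statement
DEALLOCATE PREPARE my_select;
```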
Add support for running regular expression functions using the more efficient re2j-td library by setting the regex_library session property to RE2J. The memory footprint can be adjusted by setting the re2j_dfa_states_limit session property. Additionally, the number of times the re2j library falls back from its DFA algorithm to the NFA algorithm (due to hitting the states limit) before immediately starting with the NFA algorithm can be set with the re2j_dfa_retries session property.
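For example (the session property names are as described above; the values are illustrative):

```sql
-- Evaluate regular expressions with RE2J for this session
SET SESSION regex_library = 'RE2J';

-- Bound the DFA memory, and retry the DFA 5 times before going straight to NFA
SET SESSION re2j_dfa_states_limit = 10000;
SET SESSION re2j_dfa_retries = 5;

SELECT regexp_extract('1a 2b 3c', '(\d)([a-z])', 2);
```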
Add support for Presto to query a Kerberized Hadoop cluster. The Hive connector provides additional security options to support Hadoop clusters that have been configured to use Kerberos. When accessing HDFS, Presto can impersonate the end user who is running the query. This can be used with HDFS permissions and ACLs to provide additional security for data.
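A minimal Hive connector configuration for such a cluster might look like the following sketch; the principals, keytab paths, and the impersonation setting are placeholders that must match your environment:

```
hive.metastore.authentication.type=KERBEROS
hive.metastore.service.principal=hive/_HOST@EXAMPLE.COM
hive.metastore.client.principal=presto@EXAMPLE.COM
hive.metastore.client.keytab=/etc/presto/presto.keytab

hive.hdfs.authentication.type=KERBEROS
hive.hdfs.impersonation.enabled=true
hive.hdfs.presto.principal=presto@EXAMPLE.COM
hive.hdfs.presto.keytab=/etc/presto/presto.keytab
```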
Add support for EXPLAIN ANALYZE, which executes the statement and shows the distributed execution plan of the statement along with the cost of each operation.
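A quick sketch, again assuming the TPC-H nation table:

```sql
-- Execute the query and annotate the distributed plan with per-operator cost
EXPLAIN ANALYZE
SELECT regionkey, count(*)
FROM nation
GROUP BY regionkey;
```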
Add support for DECIMAL, a fixed-precision data type.
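A brief illustration of the new type (note the Beta caveat below):

```sql
-- Fixed-precision arithmetic with explicit precision and scale
SELECT CAST('123.45' AS DECIMAL(10,2)) + CAST('0.55' AS DECIMAL(10,2));
```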
Some functionality from Presto 0.141 may work but is not officially supported by Teradata, including:
- The installation method as documented on prestodb.io.
- Web Connector for Tableau
- The following connectors:
- Developing Plugins
SQL Limitations
- Decimal support is currently in Beta stage
- The SQL keyword end is used as a column name in system.runtime.queries, so in order to query that column, end must be wrapped in quotes (see the example after this list)
- NATURAL JOIN is not supported
- Correlated subqueries are not supported
- Non-equi joins are only supported for inner join, e.g. "n_name" < "p_name"
- INTERSECT is not supported
- GROUPING SETS are not supported
- OFFSET is not supported
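For instance, quoting the end column, and a non-equi inner join (assuming Hive tables with TPC-H style columns n_name and p_name):

```sql
-- "end" is a SQL keyword, so the column name must be quoted
SELECT query_id, "end" FROM system.runtime.queries;

-- Non-equi condition, allowed only for inner joins
SELECT n_name, p_name
FROM nation, part
WHERE n_name < p_name;
```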
Hive Connector Limitations
Teradata supports data stored in the following formats:
- Text files
Hive to Presto data type mapping
Presto does not map Hive data types 1-to-1:
- All integral types are mapped to BIGINT
- FLOAT and DOUBLE are mapped to DOUBLE
- STRING and VARCHAR are mapped to VARCHAR
These mappings may be visible if column values are passed to Hive UDFs, or through slight differences in mathematical operations.
Because FLOAT values are mapped to DOUBLE, the user may see unexpected results. For example, a Hive data file containing a FLOAT column with value 123.345 will be presented to the user as a double whose string representation is 123.34500122070312.
Presto supports a granularity of milliseconds for the TIMESTAMP datatype, while Hive supports up to nanosecond precision. TIMESTAMP values in tables are parsed according to the server's timezone. If this is not what you want, you must start Presto in the UTC timezone. To do this, set the JVM timezone to UTC with -Duser.timezone=UTC, and also add the following property in the Hive connector properties file: hive.time-zone=UTC.
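Concretely (paths follow the standard Presto layout; adjust to your installation), add this line to etc/jvm.config:

```
-Duser.timezone=UTC
```

and this property to the Hive connector properties file, e.g. etc/catalog/hive.properties:

```
hive.time-zone=UTC
```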
Presto's method for declaring timestamps with/without timezone is not SQL standard. In Presto, both are declared using TIMESTAMP '2003-12-10 10:32:02.1212' or TIMESTAMP '2003-12-10 10:32:02.1212 UTC'. The timestamp is determined to be with or without timezone depending on whether you include a time zone at the end of the timestamp. In other systems, timestamps are explicitly declared as TIMESTAMP WITH TIME ZONE or TIMESTAMP WITHOUT TIME ZONE (with TIMESTAMP being an alias for one of them). In these systems, if you declare a TIMESTAMP WITHOUT TIME ZONE and your string has a timezone at the end, it is silently ignored. If you declare a TIMESTAMP WITH TIME ZONE and no time zone is included, the string is interpreted in the user time zone.
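The two Presto declarations side by side:

```sql
-- No zone suffix: a plain TIMESTAMP
SELECT TIMESTAMP '2003-12-10 10:32:02.1212';

-- Zone suffix: a TIMESTAMP WITH TIME ZONE
SELECT TIMESTAMP '2003-12-10 10:32:02.1212 UTC';
```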
INSERT INTO ... VALUES limitations
The data types must be exact, i.e. you must use cast('2015-1-1' as date) for a date value, and you must supply a value for every column.
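For example, given a hypothetical table orders (id bigint, order_date date):

```sql
-- Every column receives a value, and the date string is cast explicitly
INSERT INTO orders VALUES (1, cast('2015-1-1' as date));
```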
INSERT INTO ... SELECT limitations
INSERT INTO creates unreadable data (unreadable by both Hive and Presto) if a Hive table has a schema for which Presto only interprets some of the columns (e.g. due to unsupported data types). This is because the generated file on HDFS will not match the Hive table schema.
If called through JDBC, executeUpdate does not return the count of rows inserted.
Hive Parquet Issues
PARQUET support in Hive imposes more limitations than the other file types.
- BINARY datatypes are not supported
- Although a FLOAT column is mapped to DOUBLE through Presto, a value such as 123.345 is exposed as DOUBLE 123.345 in Presto, rather than the widened representation seen with other file formats
PostgreSQL and MySQL Connectors Limitations
describe table reports "Table has no supported column types" inappropriately.
Presto connects to MySQL and PostgreSQL using the credentials specified in the properties file. These credentials are used to authenticate the user when establishing the connection. Presto runs queries as the "presto" service user and does not pass down user information to the MySQL or PostgreSQL connectors.
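For reference, a MySQL catalog file (e.g. etc/catalog/mysql.properties; the URL and credentials are placeholders) supplies those shared credentials:

```
connector.name=mysql
connection-url=jdbc:mysql://example.net:3306
connection-user=presto_service
connection-password=secret
```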
PostgreSQL and MySQL each support a wide variety of datatypes (PostgreSQL datatypes, MySQL datatypes). Many of these
types are not supported in Presto. Table columns that are defined using an unsupported type are not visible to Presto
users. These columns are not shown when
describe table or
select * SQL statements are executed.
- CREATE TABLE (...) does not work, but CREATE TABLE AS SELECT does (see the sketch after this list)
- INSERT INTO is not supported
- DROP TABLE is not supported
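A sketch of the one supported path (catalog, schema, and table names are illustrative):

```sql
-- Works: create and populate the table in a single statement
CREATE TABLE mysql.sales.lineitem_copy AS
SELECT * FROM hive.default.lineitem;

-- Not supported against the MySQL and PostgreSQL connectors:
-- CREATE TABLE mysql.sales.t (x bigint);
-- INSERT INTO mysql.sales.lineitem_copy VALUES (...);
-- DROP TABLE mysql.sales.lineitem_copy;
```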
Limited SQL push-down
Presto does not “push-down” aggregate calculations to PostgreSQL or MySQL. This means that when a user executes a
simple query such as
SELECT COUNT(*) FROM lineitem the entire table will be retrieved and the aggregate calculated
by Presto. If the table is large or the network slow, this may take a very long time.
MySQL catalog names are mapped to Presto schema names.
Teradata JDBC Driver
The Teradata JDBC driver does not support batch queries.