Apache Sqoop 1.4.6 Released: Hadoop Data Migration Tool

Apache Sqoop 1.4.6 has been released. This is the fifth release of Apache Sqoop, and it is now available for download:

http://www.apache.org/dyn/closer.cgi/sqoop/

The changes in this release are as follows:

Bug Fixes

  • [ SQOOP-1125 ] - Out of memory errors when number of records to import < 0.5 * splitSize

  • [ SQOOP-1368 ] - the configuration properties are reset in HBaseImportJob

  • [ SQOOP-1387 ] - Incorrect permissions on manager.d directory can lead to NPE

  • [ SQOOP-1400 ] - Failed to import data using mysql-connector-java-5.1.17.jar

  • [ SQOOP-1411 ] - The number of tasks is not set properly in PGBulkloadExportManager

  • [ SQOOP-1412 ] - Text splitter should also consider NCHAR and NVARCHAR fields

  • [ SQOOP-1422 ] - Integration tests for Oracle connector fail as not using direct option

  • [ SQOOP-1423 ] - hcatalog export with --map-column-java fails

  • [ SQOOP-1429 ] - Fix native characters usage for SqlServer object names

  • [ SQOOP-1433 ] - Oracle direct connector fails with Oracle 12c JDBC driver

  • [ SQOOP-1437 ] - 'Text' reserved word in compilation

  • [ SQOOP-1472 ] - Use Properties.load() method to load property files under conf/manager.d

  • [ SQOOP-1474 ] - Fix usage of StringUtils.repeat method

  • [ SQOOP-1490 ] - Connectors documentation doesn't build on CentOS 5

  • [ SQOOP-1494 ] - Fix generateArvoSchema in DataDrivenImportJob

  • [ SQOOP-1519 ] - Enable HCat/HBase/Accumulo operations with OraOop connection manager

  • [ SQOOP-1520 ] - The table is null when using import into hive as parquet file and query option

  • [ SQOOP-1524 ] - Error to import into hive as textfile on hive 0.13.0

  • [ SQOOP-1539 ] - Empty partition keys and values in multiple partition key hcatalog usage should be validated

  • [ SQOOP-1540 ] - Accumulo unit tests fail with Accumulo 1.6.1 because of conflicts in libthrift libraries

  • [ SQOOP-1617 ] - MySQL fetch-size behavior changed with SQOOP-1400

  • [ SQOOP-1627 ] - Fix Hadoop100 and Hadoop20 profile

  • [ SQOOP-1631 ] - Drop confusing use of --clean-staging-table parameter from PGBulkloadManager

  • [ SQOOP-1663 ] - OraOop test cases are not logging any output

  • [ SQOOP-1682 ] - Test cases *LobAvroImportTest are failing

  • [ SQOOP-1684 ] - Use pre-existing HBase delegation token

  • [ SQOOP-1685 ] - HCatalog integration is not working on JDK8

  • [ SQOOP-1759 ] - TestIncrementalImport fails with NPE on Windows

  • [ SQOOP-1764 ] - Numeric Overflow when getting extent map

  • [ SQOOP-1779 ] - Add support for --hive-database when importing Parquet files into Hive

  • [ SQOOP-1826 ] - NPE in ImportTool.lastModifiedMerge during postgres import

  • [ SQOOP-1890 ] - Properly escape table name in generated queries

  • [ SQOOP-1970 ] - Add warning about trailing whitespace characters when using password file to User guide

  • [ SQOOP-2017 ] - Print out loaded columns and their type in verbose mode

  • [ SQOOP-2024 ] - Hive import doesn't remove target directory in hive

  • [ SQOOP-2055 ] - Run only one map task attempt during export

  • [ SQOOP-2057 ] - Skip delegation token generation flag during hbase import

  • [ SQOOP-2128 ] - HBaseImportJob should close connection from HBaseAdmin to HBase

  • [ SQOOP-2130 ] - BaseSqoopTestCase should use manager.escapeTable instead of directly hardcoding double quotes

  • [ SQOOP-2132 ] - Remove test TestMainframeImportTool.testPrintHelp

  • [ SQOOP-2136 ] - Test case SqlServerUpsertOutputFormatTest is failing

  • [ SQOOP-2137 ] - Sqoop tests and documentation refer to as-avrofile (instead of as-avrodatafile)

  • [ SQOOP-2145 ] - Default Hive home is not being set properly under certain circumstances

  • [ SQOOP-2164 ] - Enhance the Netezza Connector for Sqoop

  • [ SQOOP-2170 ] - MySQL specific tests are not properly cleaning up created tables

  • [ SQOOP-2191 ] - Provide an option to automatically choose one mapper when neither a primary key is defined nor a split-by column is provided (see the example after this list)

  • [ SQOOP-2254 ] - Failed to build release notes

  • [ SQOOP-2257 ] - Parquet target for imports with Hive overwrite option does not work

  • [ SQOOP-2263 ] - Sqoop1 has some files without a copyright header

  • [ SQOOP-2264 ] - Exclude and remove SqoopUserGuide.xml from git repository

  • [ SQOOP-2281 ] - Set overwrite on kite dataset

  • [ SQOOP-2282 ] - Add validation check for --hive-import and --append

  • [ SQOOP-2283 ] - Support usage of --exec and --password-alias

  • [ SQOOP-2286 ] - Ensure Sqoop generates valid avro column names

  • [ SQOOP-2290 ] - java.lang.ArrayIndexOutOfBoundsException thrown when malformed column mapping is provided

  • [ SQOOP-2294 ] - Change to Avro schema name breaks some use cases

  • [ SQOOP-2324 ] - Remove extra license handling for consistency
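
As a concrete illustration of SQOOP-2191 above: when a table has no primary key, Sqoop cannot pick a split column on its own, so an import must either name one explicitly or fall back to a single mapper. A minimal sketch (the connection string, table, and column names here are hypothetical):

    # parallel import with an explicit split column
    sqoop import \
      --connect jdbc:mysql://dbhost/sales \
      --username sqoop --password-file /user/sqoop/.pw \
      --table orders \
      --split-by order_id
    # ...or, when no suitable column exists, force a single mapper:
    #   --num-mappers 1

Relatedly, SQOOP-1970 above documents a pitfall with --password-file: a trailing newline or whitespace character in the file becomes part of the password, so the file should be written without one.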

Improvements

  • [ SQOOP-1330 ] - Ignore blank newlines in managers.d property files

  • [ SQOOP-1391 ] - Compression codec handling

  • [ SQOOP-1392 ] - Create the temporary directory inside task working dir rather than in tmp

  • [ SQOOP-1421 ] - Automated patch script

  • [ SQOOP-1471 ] - Use Hadoop CredentialProvider API to encrypt passwords at rest (see the example after this list)

  • [ SQOOP-1489 ] - Propagate cubrid properties to the test VM

  • [ SQOOP-1567 ] - Auto-Configure JTDS Driver From JDBCUrl

  • [ SQOOP-1622 ] - Copying from staging table should be in single transaction for pg_bulkload connector

  • [ SQOOP-1632 ] - Add support for index organized tables to direct connector

  • [ SQOOP-2149 ] - Update Kite dependency to 1.0.0

  • [ SQOOP-2252 ] - Add default to Avro Schema
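
To show what SQOOP-1471 above enables: rather than passing a plaintext password, the password can be stored once through the Hadoop CredentialProvider API and then referenced by alias. A rough sketch, assuming Hadoop 2.6 or later and a hypothetical keystore path and alias name:

    # store the password in a JCEKS keystore on HDFS (prompts for the value)
    hadoop credential create mydb.password \
      -provider jceks://hdfs/user/sqoop/passwords.jceks

    # reference the alias instead of a plaintext password
    sqoop import \
      -Dhadoop.security.credential.provider.path=jceks://hdfs/user/sqoop/passwords.jceks \
      --connect jdbc:mysql://dbhost/sales \
      --username sqoop \
      --password-alias mydb.password \
      --table orders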

New Features

  • [ SQOOP-1272 ] - Support importing mainframe sequential datasets

  • [ SQOOP-1309 ] - Expand the Sqoop to support CUBRID database.

  • [ SQOOP-1366 ] - Propose to add Parquet support (see the example after this list)

  • [ SQOOP-1403 ] - Upsert export for SQL Server

  • [ SQOOP-1405 ] - Add arg to enable SQL Server identity insert on export

  • [ SQOOP-1450 ] - Copy the avro schema file to hdfs for AVRO based import
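
To illustrate the Parquet support added by SQOOP-1366 above: an import can now write Parquet files directly, including into a Hive table (SQOOP-1779 in the bug-fix list extends this path to --hive-database). A hedged sketch with hypothetical connection, database, and table names:

    sqoop import \
      --connect jdbc:mysql://dbhost/sales \
      --username sqoop --password-alias mydb.password \
      --table orders \
      --hive-import \
      --hive-database analytics \
      --as-parquetfile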

For the complete list of changes, see:

https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12327469&projectId=12311320

Sqoop is a tool for transferring data between Hadoop and relational databases: it can import data from a relational database (such as MySQL, Oracle, or PostgreSQL) into HDFS, and export data from HDFS back into a relational database.
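
In practice both directions are driven from the command line. A minimal round trip might look like the following sketch (host, database, table, and path names are all hypothetical):

    # import a relational table into HDFS
    sqoop import \
      --connect jdbc:mysql://dbhost/sales \
      --username sqoop --password-file /user/sqoop/.pw \
      --table orders \
      --target-dir /data/orders

    # export the HDFS files back into an existing relational table
    sqoop export \
      --connect jdbc:mysql://dbhost/sales \
      --username sqoop --password-file /user/sqoop/.pw \
      --table orders_copy \
      --export-dir /data/orders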
