1. Start the remote metastore service: hive --service metastore (this command blocks the current terminal), then work from the client.
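As a quick sanity check (a minimal sketch, assuming the client's hive.metastore.uris property already points at the remote metastore, typically thrift://<metastore-host>:9083), the following can be run from the Hive CLI on the client:
--run on the client; prints the metastore URI the client is configured to use
set hive.metastore.uris;
--if the client can reach the remote metastore, this returns the database list
show databases;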
2. Hive DDL (Data Definition Language)
--show all databases
show databases;
--switch to a database
use database_name;
3. CREATE TABLE syntax (copying it into Notepad++ is recommended; syntax highlighting makes the types easier to read)
/*
Basic syntax for creating a table:
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name    -- (Note: TEMPORARY available in Hive 0.14.0 and later)
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
  [SKEWED BY (col_name, col_name, ...)                  -- (Note: Available in Hive 0.10.0 and later)]
     ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
     [STORED AS DIRECTORIES]
  [
   [ROW FORMAT row_format]
   [STORED AS file_format]
     | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]  -- (Note: Available in Hive 0.6.0 and later)
  ]
  [LOCATION hdfs_path]
  [TBLPROPERTIES (property_name=property_value, ...)]   -- (Note: Available in Hive 0.6.0 and later)
  [AS select_statement];   -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
  LIKE existing_table_or_view_name
  [LOCATION hdfs_path];

Complex data types
data_type
  : primitive_type
  | array_type
  | map_type
  | struct_type
  | union_type  -- (Note: Available in Hive 0.7.0 and later)

Primitive data types
primitive_type
  : TINYINT
  | SMALLINT
  | INT
  | BIGINT
  | BOOLEAN
  | FLOAT
  | DOUBLE
  | DOUBLE PRECISION          -- (Note: Available in Hive 2.2.0 and later)
  | STRING
  | BINARY                    -- (Note: Available in Hive 0.8.0 and later)
  | TIMESTAMP                 -- (Note: Available in Hive 0.8.0 and later)
  | DECIMAL                   -- (Note: Available in Hive 0.11.0 and later)
  | DECIMAL(precision, scale) -- (Note: Available in Hive 0.13.0 and later)
  | DATE                      -- (Note: Available in Hive 0.12.0 and later)
  | VARCHAR                   -- (Note: Available in Hive 0.12.0 and later)
  | CHAR                      -- (Note: Available in Hive 0.13.0 and later)

array_type
  : ARRAY < data_type >

map_type
  : MAP < primitive_type, data_type >

struct_type
  : STRUCT < col_name : data_type [COMMENT col_comment], ...>

union_type
  : UNIONTYPE < data_type, data_type, ... >  -- (Note: Available in Hive 0.7.0 and later)

Row format specification
row_format
  : DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
        [MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
        [NULL DEFINED AS char]   -- (Note: Available in Hive 0.13 and later)
  | SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]

Basic file formats
file_format:
  : SEQUENCEFILE
  | TEXTFILE    -- (Default, depending on hive.default.fileformat configuration)
  | RCFILE      -- (Note: Available in Hive 0.6.0 and later)
  | ORC         -- (Note: Available in Hive 0.11.0 and later)
  | PARQUET     -- (Note: Available in Hive 0.13.0 and later)
  | AVRO        -- (Note: Available in Hive 0.14.0 and later)
  | JSONFILE    -- (Note: Available in Hive 4.0.0 and later)
  | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

Table constraints
constraint_specification:
  : [, PRIMARY KEY (col_name, ...) DISABLE NOVALIDATE ]
    [, CONSTRAINT constraint_name FOREIGN KEY (col_name, ...) REFERENCES table_name(col_name, ...) DISABLE NOVALIDATE
*/
--create a plain hive table (without a row format definition)
create table psn(
id int,
name string,
likes array<string>,
address map<string,string>
);
--create a hive table with a custom row format
create table psn2(
id int,
name string,
likes array<string>,
address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';
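For illustration, a data file matching psn2's delimiters could look like the hypothetical sample below; once loaded, the array and map columns can be queried directly (the file path and contents are made up for this sketch):
--hypothetical contents of /root/data/psn2_data, matching the delimiters above:
--1,xiaoming,game-music,beijing:haidian-shanghai:pudong
--2,xiaohong,read-travel,guangzhou:tianhe
load data local inpath '/root/data/psn2_data' into table psn2;
select id, name, likes[1], address['beijing'] from psn2;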
--create a hive external table (requires the external and location keywords)
create external table psn4(
id int,
name string,
likes array<string>,
address map<string,string>
)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':'
location '/data';
--The tables created before this (psn, psn2, psn3) are Hive internal (managed) tables, while psn4 is an external table. The differences are:
--1. An internal table stores its data under Hive's default warehouse directory, whereas an external table requires an extra directory to be specified at creation time.
--2. Dropping an internal table deletes both the metadata and the data; dropping an external table deletes only the metadata, leaving the data in place.
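To verify how a table is registered in the metastore, a quick check is the sketch below (output fields abbreviated):
--shows table type and storage location
desc formatted psn4;
--the output should contain "Table Type: EXTERNAL_TABLE" and the location '/data',
--whereas a managed table such as psn2 shows "Table Type: MANAGED_TABLE"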
--Use cases:
--Internal tables: create the table first and then load data into it; suitable for storing intermediate results.
--External tables: the table can be created either before or after the data exists, because an external table essentially maps an HDFS directory onto a Hive table; suitable for storing raw data, since the data will not be deleted by an accidental drop.
--create a multi-partition table
create table psn6(
id int,
name string,
likes array<string>,
address map<string,string>
)
partitioned by(gender string,age int)
row format delimited
fields terminated by ','
collection items terminated by '-'
map keys terminated by ':';
Notes:
1. Once the partitioned table is created, each partition column becomes a directory in HDFS when data is saved, so partitions show up as nested directories (see the sketch below).
2. When inserting data into a multi-partition table, you cannot supply a value for only one partition column; every partition column must be given a value.
3. The order in which partition values are supplied does not matter; values are matched to the partition columns by name.
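As a sketch of note 1 above (the partition values are made up, and the warehouse path shown is the Hive default), the partition columns turn into nested directories and can be listed with show partitions:
--after rows with gender='man', age=12 have been added, the layout is roughly:
--  /user/hive/warehouse/psn6/gender=man/age=12/<data files>
show partitions psn6;
--lists partitions such as gender=man/age=12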
Loading a data file into a table
Syntax:
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)] [INPUTFORMAT 'inputformat' SERDE 'serde'] (3.0 or later)
Load local data into a Hive table
load data local inpath '/root/data/data' into table psn; --(/root/data/data is a local Linux path)
Load an HDFS data file into a Hive table
load data inpath '/data/data' into table psn; --(/data/data is an HDFS path)
Notes:
1. The load operation does not transform or modify the data in any way.
2. Loading a local Linux file (load data local) copies the file into the corresponding table directory.
3. Loading a file from HDFS moves the file into the corresponding table directory.
4. The load operation also supports loading into partitioned tables; you just need to supply values for the partition columns (see the example below).
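For example, loading into the multi-partition table psn6 created earlier must supply a value for every partition column (the file path and partition values here are illustrative):
load data local inpath '/root/data/data' into table psn6 partition(gender='man', age=12);
--omitting either gender or age would fail, since every partition column needs a value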