OpenTSDB Time Series Database

Environment Setup

- First make sure an HBase cluster is already installed.
- OpenTSDB has no clustering solution of its own; it builds its distributed deployment on top of an HBase cluster: multiple OpenTSDB nodes simply read and write the same HBase cluster.
  1. Install gnuplot, automake, and the other build dependencies

    yum install gnuplot automake autoconf git -y
  2. Download OpenTSDB (a download sketch follows below)
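
    • OpenTSDB source releases are published on the project's GitHub releases page (https://github.com/OpenTSDB/opentsdb/releases). A rough sketch of the download step; the exact 2.2.0 tarball URL is an assumption, so verify it on that page:

    cd /kfly/install/
    # assumed release asset name; check the releases page if the URL differs
    wget https://github.com/OpenTSDB/opentsdb/releases/download/v2.2.0/opentsdb-2.2.0.tar.gz
    tar -zxvf opentsdb-2.2.0.tar.gz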

  3. Build and install

    cd /kfly/install/opentsdb-2.2.0/
    ./build.sh 
    
    cd build/
    make install
    
  4. Preparation before starting OpenTSDB for the first time

    # Before the first start, prepare HBase by running the table-initialization script
    env COMPRESSION=NONE HBASE_HOME=/kfly/install/hbase-0.94.27 ./src/create_table.sh 
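
    • Optionally verify that the script created the OpenTSDB tables; a quick check through the HBase shell, using the HBASE_HOME from above:

    # should list tsdb, tsdb-uid, tsdb-tree and tsdb-meta
    echo "list" | /kfly/install/hbase-0.94.27/bin/hbase shell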
    
  5. Configuration file opentsdb.conf

    # --------- NETWORK ----------
    # The TCP port TSD should use for communications
    # *** REQUIRED ***
    tsd.network.port = 4399

    # ---------- HTTP -----------
    # The location of static files for the HTTP GUI interface.
    # *** REQUIRED ***
    tsd.http.staticroot = ./staticroot

    # Where TSD should write its cache files to
    # *** REQUIRED ***
    tsd.http.cachedir = /root/openTSDB_temp

    # --------- CORE ----------
    # Whether or not to automatically create UIDs for new metric types,
    # default is false
    tsd.core.auto_create_metrics = true

    # Name of the HBase table where data points are stored, default is "tsdb"
    tsd.storage.hbase.data_table = tsdb

    # Path under which the znode for the -ROOT- region is located, default is "/hbase"
    tsd.storage.hbase.zk_basedir = /hbase

    # A comma separated list of Zookeeper hosts to connect to, with or without
    # port specifiers, default is "localhost"
    tsd.storage.hbase.zk_quorum = server4,server5,server6
    
  6. Start the TSD

    ./build/tsdb tsd
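
    • In practice the TSD is usually run in the background and pointed at the config file from step 5; a rough sketch (the config path is an assumption, adjust it to where opentsdb.conf actually lives):

    nohup ./build/tsdb tsd --config=./opentsdb.conf &

    # verify the TSD is up: /api/version returns a small JSON document
    curl http://localhost:4399/api/version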
    
  7. Log management

    • OpenTSDB's default logging is very verbose, and especially when the TSD is started with nohup the log files can easily fill the entire disk, so the OpenTSDB log level needs to be adjusted in logback.xml.

    <!-- Per class logger levels -->
    <logger name="QueryLog" level="OFF" additivity="false">
      <appender-ref ref="QUERY_LOG"/>
    </logger>

    <!-- To reduce log output, the log level OFF is changed to ERROR -->
    <!-- Per class logger levels -->
    <logger name="QueryLog" level="ERROR" additivity="false">
      <appender-ref ref="QUERY_LOG"/>
    </logger>
    

Java API

Writing data (details)

// HTTP client pointing at the TSD started above
HttpClientImpl client = new HttpClientImpl("http://node03:4399");
// Build a batch of data points: each metric carries a value and one or more tags
MetricBuilder builder = MetricBuilder.getInstance();
builder.addMetric("metric1").setDataPoint(30L)
  .addTag("tag1", "tab1value").addTag("tag2", "tab2value");
builder.addMetric("metric2").setDataPoint(232.34)
  .addTag("tag3", "tab3value");
try {
  // Push the batch and ask the TSD for a summary of how many points were written
  Response response = client.pushMetrics(builder,
                                         ExpectResponse.SUMMARY);
  System.out.println(response);
} catch (IOException e) {
  e.printStackTrace();
}
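
The client library above wraps OpenTSDB's HTTP API, so the same write can also be issued directly against the /api/put endpoint. A minimal curl sketch; host, port, and the data point mirror the Java example, and the timestamp is just a placeholder value:

curl -i -X POST "http://node03:4399/api/put?details" \
  -H "Content-Type: application/json" \
  -d '{"metric":"metric1","timestamp":1356998400,"value":30,"tags":{"tag1":"tab1value","tag2":"tab2value"}}'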

Querying data (details)

// HTTP client pointing at the TSD started above
HttpClientImpl client = new HttpClientImpl("http://node03:4399");
QueryBuilder builder = QueryBuilder.getInstance();
// A sub-query selects a metric, filters it by tags and names the aggregator to apply
SubQueries subQueries = new SubQueries();
String zimsum = Aggregator.zimsum.toString();
subQueries.addMetric("top.kfly.host2").addTag("tag", "value");
subQueries.addAggregator(zimsum);
// Query window: from the given start timestamp up to "now", in epoch seconds
long now = new Date().getTime() / 1000;
builder.getQuery().addStart(126358720).addEnd(now).addSubQuery(subQueries);
SimpleHttpResponse response = client.pushQueries(builder, ExpectResponse.STATUS_CODE);
String content = response.getContent();
System.out.println(content);
int statusCode = response.getStatusCode();
if (statusCode == 200) {
  // The TSD returns a JSON array of result objects; parse it with fastjson
  JSONArray jsonArray = JSON.parseArray(content);
  System.out.println(jsonArray.toJSONString());
}
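
The same query can also be expressed directly against the /api/query endpoint, where the m parameter takes the form aggregator:metric{tag=value}. A minimal curl sketch; the relative start time 1h-ago is just an example:

curl "http://node03:4399/api/query?start=1h-ago&m=zimsum:top.kfly.host2{tag=value}"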

For a more detailed tutorial, see the official OpenTSDB documentation.